00:00:00.001 Started by upstream project "autotest-per-patch" build number 132381 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.107 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.108 The recommended git tool is: git 00:00:00.108 using credential 00000000-0000-0000-0000-000000000002 00:00:00.110 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.152 Fetching changes from the remote Git repository 00:00:00.154 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.198 Using shallow fetch with depth 1 00:00:00.198 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.198 > git --version # timeout=10 00:00:00.228 > git --version # 'git version 2.39.2' 00:00:00.228 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.255 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.255 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.838 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.849 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.863 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.863 > git config core.sparsecheckout # timeout=10 00:00:06.873 > git read-tree -mu HEAD # timeout=10 00:00:06.890 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.911 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.911 > git rev-list 
--no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.010 [Pipeline] Start of Pipeline 00:00:07.021 [Pipeline] library 00:00:07.022 Loading library shm_lib@master 00:00:07.022 Library shm_lib@master is cached. Copying from home. 00:00:07.037 [Pipeline] node 00:00:07.044 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest_2 00:00:07.046 [Pipeline] { 00:00:07.056 [Pipeline] catchError 00:00:07.057 [Pipeline] { 00:00:07.067 [Pipeline] wrap 00:00:07.074 [Pipeline] { 00:00:07.079 [Pipeline] stage 00:00:07.080 [Pipeline] { (Prologue) 00:00:07.095 [Pipeline] echo 00:00:07.096 Node: VM-host-WFP7 00:00:07.101 [Pipeline] cleanWs 00:00:07.109 [WS-CLEANUP] Deleting project workspace... 00:00:07.109 [WS-CLEANUP] Deferred wipeout is used... 00:00:07.118 [WS-CLEANUP] done 00:00:07.317 [Pipeline] setCustomBuildProperty 00:00:07.388 [Pipeline] httpRequest 00:00:08.060 [Pipeline] echo 00:00:08.061 Sorcerer 10.211.164.20 is alive 00:00:08.071 [Pipeline] retry 00:00:08.073 [Pipeline] { 00:00:08.087 [Pipeline] httpRequest 00:00:08.092 HttpMethod: GET 00:00:08.093 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.094 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.105 Response Code: HTTP/1.1 200 OK 00:00:08.106 Success: Status code 200 is in the accepted range: 200,404 00:00:08.107 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.239 [Pipeline] } 00:00:09.255 [Pipeline] // retry 00:00:09.263 [Pipeline] sh 00:00:09.568 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.582 [Pipeline] httpRequest 00:00:09.912 [Pipeline] echo 00:00:09.913 Sorcerer 10.211.164.20 is alive 00:00:09.921 [Pipeline] retry 00:00:09.922 [Pipeline] { 00:00:09.937 [Pipeline] httpRequest 00:00:09.941 HttpMethod: GET 00:00:09.941 URL: 
http://10.211.164.20/packages/spdk_097badaebc5925d7299eba66d2899808afbab0b1.tar.gz 00:00:09.942 Sending request to url: http://10.211.164.20/packages/spdk_097badaebc5925d7299eba66d2899808afbab0b1.tar.gz 00:00:09.957 Response Code: HTTP/1.1 200 OK 00:00:09.957 Success: Status code 200 is in the accepted range: 200,404 00:00:09.958 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/spdk_097badaebc5925d7299eba66d2899808afbab0b1.tar.gz 00:01:01.773 [Pipeline] } 00:01:01.789 [Pipeline] // retry 00:01:01.797 [Pipeline] sh 00:01:02.081 + tar --no-same-owner -xf spdk_097badaebc5925d7299eba66d2899808afbab0b1.tar.gz 00:01:04.647 [Pipeline] sh 00:01:04.959 + git -C spdk log --oneline -n5 00:01:04.959 097badaeb test/nvmf: Solve ambiguity around $NVMF_SECOND_TARGET_IP 00:01:04.959 2741dd1ac test/nvmf: Don't pin nvmf_bdevperf and nvmf_target_disconnect to phy 00:01:04.959 4f0cbdcd1 test/nvmf: Remove all transport conditions from the test suites 00:01:04.959 097b7c969 test/nvmf: Drop $RDMA_IP_LIST 00:01:04.959 400f484f7 test/nvmf: Drop $NVMF_INITIATOR_IP in favor of $NVMF_FIRST_INITIATOR_IP 00:01:04.978 [Pipeline] writeFile 00:01:04.994 [Pipeline] sh 00:01:05.279 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:05.291 [Pipeline] sh 00:01:05.576 + cat autorun-spdk.conf 00:01:05.576 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:05.576 SPDK_RUN_ASAN=1 00:01:05.576 SPDK_RUN_UBSAN=1 00:01:05.576 SPDK_TEST_RAID=1 00:01:05.576 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:05.584 RUN_NIGHTLY=0 00:01:05.586 [Pipeline] } 00:01:05.600 [Pipeline] // stage 00:01:05.616 [Pipeline] stage 00:01:05.618 [Pipeline] { (Run VM) 00:01:05.632 [Pipeline] sh 00:01:05.915 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:05.915 + echo 'Start stage prepare_nvme.sh' 00:01:05.915 Start stage prepare_nvme.sh 00:01:05.915 + [[ -n 1 ]] 00:01:05.915 + disk_prefix=ex1 00:01:05.915 + [[ -n /var/jenkins/workspace/raid-vg-autotest_2 ]] 00:01:05.915 + [[ -e 
/var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf ]] 00:01:05.915 + source /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf 00:01:05.915 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:05.915 ++ SPDK_RUN_ASAN=1 00:01:05.915 ++ SPDK_RUN_UBSAN=1 00:01:05.915 ++ SPDK_TEST_RAID=1 00:01:05.915 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:05.915 ++ RUN_NIGHTLY=0 00:01:05.915 + cd /var/jenkins/workspace/raid-vg-autotest_2 00:01:05.915 + nvme_files=() 00:01:05.915 + declare -A nvme_files 00:01:05.915 + backend_dir=/var/lib/libvirt/images/backends 00:01:05.915 + nvme_files['nvme.img']=5G 00:01:05.915 + nvme_files['nvme-cmb.img']=5G 00:01:05.915 + nvme_files['nvme-multi0.img']=4G 00:01:05.915 + nvme_files['nvme-multi1.img']=4G 00:01:05.915 + nvme_files['nvme-multi2.img']=4G 00:01:05.915 + nvme_files['nvme-openstack.img']=8G 00:01:05.915 + nvme_files['nvme-zns.img']=5G 00:01:05.915 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:05.915 + (( SPDK_TEST_FTL == 1 )) 00:01:05.915 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:05.915 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:05.915 + for nvme in "${!nvme_files[@]}" 00:01:05.915 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:01:05.915 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:05.915 + for nvme in "${!nvme_files[@]}" 00:01:05.915 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:01:05.915 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:05.915 + for nvme in "${!nvme_files[@]}" 00:01:05.915 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:01:05.915 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:05.915 + for nvme in "${!nvme_files[@]}" 00:01:05.915 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:01:05.916 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:05.916 + for nvme in "${!nvme_files[@]}" 00:01:05.916 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:01:05.916 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:05.916 + for nvme in "${!nvme_files[@]}" 00:01:05.916 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:01:05.916 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:05.916 + for nvme in "${!nvme_files[@]}" 00:01:05.916 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:01:05.916 
Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:06.175 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:01:06.175 + echo 'End stage prepare_nvme.sh' 00:01:06.175 End stage prepare_nvme.sh 00:01:06.187 [Pipeline] sh 00:01:06.472 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:06.472 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39 00:01:06.472 00:01:06.472 DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant 00:01:06.472 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk 00:01:06.472 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest_2 00:01:06.472 HELP=0 00:01:06.472 DRY_RUN=0 00:01:06.472 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img, 00:01:06.472 NVME_DISKS_TYPE=nvme,nvme, 00:01:06.472 NVME_AUTO_CREATE=0 00:01:06.472 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img, 00:01:06.472 NVME_CMB=,, 00:01:06.472 NVME_PMR=,, 00:01:06.472 NVME_ZNS=,, 00:01:06.472 NVME_MS=,, 00:01:06.472 NVME_FDP=,, 00:01:06.472 SPDK_VAGRANT_DISTRO=fedora39 00:01:06.472 SPDK_VAGRANT_VMCPU=10 00:01:06.472 SPDK_VAGRANT_VMRAM=12288 00:01:06.472 SPDK_VAGRANT_PROVIDER=libvirt 00:01:06.472 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:06.472 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:06.472 SPDK_OPENSTACK_NETWORK=0 00:01:06.472 VAGRANT_PACKAGE_BOX=0 00:01:06.472 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 
00:01:06.472 FORCE_DISTRO=true 00:01:06.472 VAGRANT_BOX_VERSION= 00:01:06.472 EXTRA_VAGRANTFILES= 00:01:06.472 NIC_MODEL=virtio 00:01:06.472 00:01:06.472 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt' 00:01:06.472 /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest_2 00:01:09.010 Bringing machine 'default' up with 'libvirt' provider... 00:01:09.268 ==> default: Creating image (snapshot of base box volume). 00:01:09.527 ==> default: Creating domain with the following settings... 00:01:09.527 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732098252_e451e7a81b1ee661f89b 00:01:09.527 ==> default: -- Domain type: kvm 00:01:09.528 ==> default: -- Cpus: 10 00:01:09.528 ==> default: -- Feature: acpi 00:01:09.528 ==> default: -- Feature: apic 00:01:09.528 ==> default: -- Feature: pae 00:01:09.528 ==> default: -- Memory: 12288M 00:01:09.528 ==> default: -- Memory Backing: hugepages: 00:01:09.528 ==> default: -- Management MAC: 00:01:09.528 ==> default: -- Loader: 00:01:09.528 ==> default: -- Nvram: 00:01:09.528 ==> default: -- Base box: spdk/fedora39 00:01:09.528 ==> default: -- Storage pool: default 00:01:09.528 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732098252_e451e7a81b1ee661f89b.img (20G) 00:01:09.528 ==> default: -- Volume Cache: default 00:01:09.528 ==> default: -- Kernel: 00:01:09.528 ==> default: -- Initrd: 00:01:09.528 ==> default: -- Graphics Type: vnc 00:01:09.528 ==> default: -- Graphics Port: -1 00:01:09.528 ==> default: -- Graphics IP: 127.0.0.1 00:01:09.528 ==> default: -- Graphics Password: Not defined 00:01:09.528 ==> default: -- Video Type: cirrus 00:01:09.528 ==> default: -- Video VRAM: 9216 00:01:09.528 ==> default: -- Sound Type: 00:01:09.528 ==> default: -- Keymap: en-us 00:01:09.528 ==> default: -- TPM Path: 00:01:09.528 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:09.528 ==> default: -- Command line 
args: 00:01:09.528 ==> default: -> value=-device, 00:01:09.528 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:09.528 ==> default: -> value=-drive, 00:01:09.528 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:01:09.528 ==> default: -> value=-device, 00:01:09.528 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:09.528 ==> default: -> value=-device, 00:01:09.528 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:09.528 ==> default: -> value=-drive, 00:01:09.528 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:09.528 ==> default: -> value=-device, 00:01:09.528 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:09.528 ==> default: -> value=-drive, 00:01:09.528 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:09.528 ==> default: -> value=-device, 00:01:09.528 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:09.528 ==> default: -> value=-drive, 00:01:09.528 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:09.528 ==> default: -> value=-device, 00:01:09.528 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:09.528 ==> default: Creating shared folders metadata... 00:01:09.528 ==> default: Starting domain. 00:01:10.907 ==> default: Waiting for domain to get an IP address... 00:01:29.003 ==> default: Waiting for SSH to become available... 00:01:29.003 ==> default: Configuring and enabling network interfaces... 
00:01:34.310 default: SSH address: 192.168.121.68:22 00:01:34.310 default: SSH username: vagrant 00:01:34.310 default: SSH auth method: private key 00:01:36.845 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:44.973 ==> default: Mounting SSHFS shared folder... 00:01:47.543 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:47.543 ==> default: Checking Mount.. 00:01:48.490 ==> default: Folder Successfully Mounted! 00:01:48.490 ==> default: Running provisioner: file... 00:01:49.425 default: ~/.gitconfig => .gitconfig 00:01:49.993 00:01:49.993 SUCCESS! 00:01:49.993 00:01:49.993 cd to /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:01:49.993 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:49.993 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 
00:01:49.993 00:01:50.001 [Pipeline] } 00:01:50.014 [Pipeline] // stage 00:01:50.023 [Pipeline] dir 00:01:50.023 Running in /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt 00:01:50.025 [Pipeline] { 00:01:50.035 [Pipeline] catchError 00:01:50.036 [Pipeline] { 00:01:50.048 [Pipeline] sh 00:01:50.330 + vagrant ssh-config --host vagrant 00:01:50.330 + sed -ne /^Host/,$p 00:01:50.330 + tee ssh_conf 00:01:53.646 Host vagrant 00:01:53.646 HostName 192.168.121.68 00:01:53.646 User vagrant 00:01:53.646 Port 22 00:01:53.646 UserKnownHostsFile /dev/null 00:01:53.646 StrictHostKeyChecking no 00:01:53.646 PasswordAuthentication no 00:01:53.646 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:53.646 IdentitiesOnly yes 00:01:53.646 LogLevel FATAL 00:01:53.646 ForwardAgent yes 00:01:53.646 ForwardX11 yes 00:01:53.646 00:01:53.661 [Pipeline] withEnv 00:01:53.664 [Pipeline] { 00:01:53.678 [Pipeline] sh 00:01:53.963 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:53.963 source /etc/os-release 00:01:53.963 [[ -e /image.version ]] && img=$(< /image.version) 00:01:53.963 # Minimal, systemd-like check. 00:01:53.963 if [[ -e /.dockerenv ]]; then 00:01:53.963 # Clear garbage from the node's name: 00:01:53.963 # agt-er_autotest_547-896 -> autotest_547-896 00:01:53.963 # $HOSTNAME is the actual container id 00:01:53.963 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:53.963 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:53.963 # We can assume this is a mount from a host where container is running, 00:01:53.963 # so fetch its hostname to easily identify the target swarm worker. 
00:01:53.963 container="$(< /etc/hostname) ($agent)" 00:01:53.963 else 00:01:53.963 # Fallback 00:01:53.963 container=$agent 00:01:53.963 fi 00:01:53.963 fi 00:01:53.964 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:53.964 00:01:54.234 [Pipeline] } 00:01:54.252 [Pipeline] // withEnv 00:01:54.261 [Pipeline] setCustomBuildProperty 00:01:54.278 [Pipeline] stage 00:01:54.281 [Pipeline] { (Tests) 00:01:54.300 [Pipeline] sh 00:01:54.584 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:54.858 [Pipeline] sh 00:01:55.173 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:55.449 [Pipeline] timeout 00:01:55.449 Timeout set to expire in 1 hr 30 min 00:01:55.451 [Pipeline] { 00:01:55.467 [Pipeline] sh 00:01:55.751 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:56.319 HEAD is now at 097badaeb test/nvmf: Solve ambiguity around $NVMF_SECOND_TARGET_IP 00:01:56.331 [Pipeline] sh 00:01:56.613 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:56.885 [Pipeline] sh 00:01:57.167 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:57.445 [Pipeline] sh 00:01:57.764 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:01:58.028 ++ readlink -f spdk_repo 00:01:58.028 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:58.028 + [[ -n /home/vagrant/spdk_repo ]] 00:01:58.028 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:58.028 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:58.028 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:58.028 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:58.028 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:58.028 + [[ raid-vg-autotest == pkgdep-* ]] 00:01:58.028 + cd /home/vagrant/spdk_repo 00:01:58.028 + source /etc/os-release 00:01:58.028 ++ NAME='Fedora Linux' 00:01:58.028 ++ VERSION='39 (Cloud Edition)' 00:01:58.028 ++ ID=fedora 00:01:58.028 ++ VERSION_ID=39 00:01:58.028 ++ VERSION_CODENAME= 00:01:58.028 ++ PLATFORM_ID=platform:f39 00:01:58.028 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:58.028 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:58.028 ++ LOGO=fedora-logo-icon 00:01:58.028 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:58.028 ++ HOME_URL=https://fedoraproject.org/ 00:01:58.028 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:58.028 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:58.028 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:58.028 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:58.029 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:58.029 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:58.029 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:58.029 ++ SUPPORT_END=2024-11-12 00:01:58.029 ++ VARIANT='Cloud Edition' 00:01:58.029 ++ VARIANT_ID=cloud 00:01:58.029 + uname -a 00:01:58.029 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:58.029 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:58.598 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:58.598 Hugepages 00:01:58.598 node hugesize free / total 00:01:58.598 node0 1048576kB 0 / 0 00:01:58.598 node0 2048kB 0 / 0 00:01:58.598 00:01:58.598 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:58.598 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:58.598 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:58.598 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:01:58.598 + rm -f /tmp/spdk-ld-path 00:01:58.598 + source autorun-spdk.conf 00:01:58.598 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:58.598 ++ SPDK_RUN_ASAN=1 00:01:58.598 ++ SPDK_RUN_UBSAN=1 00:01:58.598 ++ SPDK_TEST_RAID=1 00:01:58.598 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:58.598 ++ RUN_NIGHTLY=0 00:01:58.598 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:58.598 + [[ -n '' ]] 00:01:58.598 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:58.598 + for M in /var/spdk/build-*-manifest.txt 00:01:58.598 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:58.598 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:58.598 + for M in /var/spdk/build-*-manifest.txt 00:01:58.598 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:58.598 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:58.598 + for M in /var/spdk/build-*-manifest.txt 00:01:58.598 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:58.598 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:58.598 ++ uname 00:01:58.598 + [[ Linux == \L\i\n\u\x ]] 00:01:58.598 + sudo dmesg -T 00:01:58.598 + sudo dmesg --clear 00:01:58.858 + dmesg_pid=5426 00:01:58.858 + sudo dmesg -Tw 00:01:58.858 + [[ Fedora Linux == FreeBSD ]] 00:01:58.858 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:58.858 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:58.858 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:58.858 + [[ -x /usr/src/fio-static/fio ]] 00:01:58.858 + export FIO_BIN=/usr/src/fio-static/fio 00:01:58.859 + FIO_BIN=/usr/src/fio-static/fio 00:01:58.859 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:58.859 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:58.859 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:58.859 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:58.859 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:58.859 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:58.859 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:58.859 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:58.859 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:58.859 10:25:02 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:58.859 10:25:02 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:58.859 10:25:02 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:58.859 10:25:02 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:01:58.859 10:25:02 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:01:58.859 10:25:02 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:01:58.859 10:25:02 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:58.859 10:25:02 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:01:58.859 10:25:02 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:58.859 10:25:02 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:58.859 10:25:02 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:58.859 10:25:02 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:58.859 10:25:02 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:58.859 10:25:02 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:58.859 10:25:02 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:58.859 10:25:02 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:58.859 10:25:02 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:58.859 10:25:02 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:58.859 10:25:02 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:58.859 10:25:02 -- paths/export.sh@5 -- $ export PATH 00:01:58.859 10:25:02 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:58.859 10:25:02 -- 
common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:58.859 10:25:02 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:58.859 10:25:02 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732098302.XXXXXX 00:01:58.859 10:25:02 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732098302.SGYxEh 00:01:58.859 10:25:02 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:58.859 10:25:02 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:58.859 10:25:02 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:58.859 10:25:02 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:58.859 10:25:02 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:58.859 10:25:02 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:58.859 10:25:02 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:58.859 10:25:02 -- common/autotest_common.sh@10 -- $ set +x 00:01:58.859 10:25:02 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:01:58.859 10:25:02 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:58.859 10:25:02 -- pm/common@17 -- $ local monitor 00:01:58.859 10:25:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:58.859 10:25:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:58.859 10:25:02 -- pm/common@25 -- $ sleep 1 00:01:58.859 10:25:02 -- pm/common@21 -- $ date +%s 00:01:58.859 10:25:02 -- pm/common@21 -- $ date +%s 00:01:58.859 
10:25:02 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732098302 00:01:58.859 10:25:02 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732098302 00:01:59.119 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732098302_collect-cpu-load.pm.log 00:01:59.119 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732098302_collect-vmstat.pm.log 00:02:00.059 10:25:03 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:00.059 10:25:03 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:00.059 10:25:03 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:00.059 10:25:03 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:00.059 10:25:03 -- spdk/autobuild.sh@16 -- $ date -u 00:02:00.059 Wed Nov 20 10:25:03 AM UTC 2024 00:02:00.059 10:25:03 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:00.059 v25.01-pre-206-g097badaeb 00:02:00.059 10:25:03 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:00.059 10:25:03 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:00.059 10:25:03 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:00.059 10:25:03 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:00.059 10:25:03 -- common/autotest_common.sh@10 -- $ set +x 00:02:00.059 ************************************ 00:02:00.059 START TEST asan 00:02:00.059 ************************************ 00:02:00.059 using asan 00:02:00.059 10:25:03 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:00.059 00:02:00.059 real 0m0.000s 00:02:00.059 user 0m0.000s 00:02:00.059 sys 0m0.000s 00:02:00.059 10:25:03 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:00.059 10:25:03 asan -- common/autotest_common.sh@10 -- $ set +x 
00:02:00.059 ************************************ 00:02:00.059 END TEST asan 00:02:00.059 ************************************ 00:02:00.059 10:25:03 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:00.059 10:25:03 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:00.059 10:25:03 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:00.059 10:25:03 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:00.059 10:25:03 -- common/autotest_common.sh@10 -- $ set +x 00:02:00.059 ************************************ 00:02:00.059 START TEST ubsan 00:02:00.059 ************************************ 00:02:00.059 using ubsan 00:02:00.059 10:25:03 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:00.059 00:02:00.059 real 0m0.000s 00:02:00.059 user 0m0.000s 00:02:00.059 sys 0m0.000s 00:02:00.059 10:25:03 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:00.059 10:25:03 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:00.059 ************************************ 00:02:00.059 END TEST ubsan 00:02:00.059 ************************************ 00:02:00.059 10:25:03 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:00.059 10:25:03 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:00.059 10:25:03 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:00.059 10:25:03 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:00.059 10:25:03 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:00.059 10:25:03 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:00.059 10:25:03 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:00.059 10:25:03 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:00.059 10:25:03 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:02:00.318 Using default SPDK env in 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:00.318 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:00.885 Using 'verbs' RDMA provider 00:02:16.798 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:31.681 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:31.681 Creating mk/config.mk...done. 00:02:31.681 Creating mk/cc.flags.mk...done. 00:02:31.681 Type 'make' to build. 00:02:31.681 10:25:34 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:31.681 10:25:34 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:31.681 10:25:34 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:31.681 10:25:34 -- common/autotest_common.sh@10 -- $ set +x 00:02:31.681 ************************************ 00:02:31.681 START TEST make 00:02:31.681 ************************************ 00:02:31.681 10:25:34 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:31.681 make[1]: Nothing to be done for 'all'. 
00:02:43.968 The Meson build system 00:02:43.968 Version: 1.5.0 00:02:43.968 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:43.968 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:43.968 Build type: native build 00:02:43.968 Program cat found: YES (/usr/bin/cat) 00:02:43.968 Project name: DPDK 00:02:43.968 Project version: 24.03.0 00:02:43.968 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:43.968 C linker for the host machine: cc ld.bfd 2.40-14 00:02:43.968 Host machine cpu family: x86_64 00:02:43.968 Host machine cpu: x86_64 00:02:43.968 Message: ## Building in Developer Mode ## 00:02:43.968 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:43.968 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:43.968 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:43.968 Program python3 found: YES (/usr/bin/python3) 00:02:43.968 Program cat found: YES (/usr/bin/cat) 00:02:43.968 Compiler for C supports arguments -march=native: YES 00:02:43.968 Checking for size of "void *" : 8 00:02:43.968 Checking for size of "void *" : 8 (cached) 00:02:43.968 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:43.968 Library m found: YES 00:02:43.968 Library numa found: YES 00:02:43.968 Has header "numaif.h" : YES 00:02:43.968 Library fdt found: NO 00:02:43.968 Library execinfo found: NO 00:02:43.968 Has header "execinfo.h" : YES 00:02:43.968 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:43.968 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:43.968 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:43.968 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:43.968 Run-time dependency openssl found: YES 3.1.1 00:02:43.968 Run-time dependency libpcap found: YES 1.10.4 00:02:43.968 Has header "pcap.h" with dependency 
libpcap: YES 00:02:43.968 Compiler for C supports arguments -Wcast-qual: YES 00:02:43.968 Compiler for C supports arguments -Wdeprecated: YES 00:02:43.968 Compiler for C supports arguments -Wformat: YES 00:02:43.968 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:43.968 Compiler for C supports arguments -Wformat-security: NO 00:02:43.968 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:43.968 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:43.968 Compiler for C supports arguments -Wnested-externs: YES 00:02:43.968 Compiler for C supports arguments -Wold-style-definition: YES 00:02:43.968 Compiler for C supports arguments -Wpointer-arith: YES 00:02:43.968 Compiler for C supports arguments -Wsign-compare: YES 00:02:43.968 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:43.968 Compiler for C supports arguments -Wundef: YES 00:02:43.968 Compiler for C supports arguments -Wwrite-strings: YES 00:02:43.968 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:43.968 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:43.968 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:43.968 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:43.968 Program objdump found: YES (/usr/bin/objdump) 00:02:43.968 Compiler for C supports arguments -mavx512f: YES 00:02:43.968 Checking if "AVX512 checking" compiles: YES 00:02:43.968 Fetching value of define "__SSE4_2__" : 1 00:02:43.968 Fetching value of define "__AES__" : 1 00:02:43.968 Fetching value of define "__AVX__" : 1 00:02:43.968 Fetching value of define "__AVX2__" : 1 00:02:43.968 Fetching value of define "__AVX512BW__" : 1 00:02:43.968 Fetching value of define "__AVX512CD__" : 1 00:02:43.968 Fetching value of define "__AVX512DQ__" : 1 00:02:43.968 Fetching value of define "__AVX512F__" : 1 00:02:43.968 Fetching value of define "__AVX512VL__" : 1 00:02:43.968 Fetching value of define 
"__PCLMUL__" : 1 00:02:43.968 Fetching value of define "__RDRND__" : 1 00:02:43.968 Fetching value of define "__RDSEED__" : 1 00:02:43.968 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:43.968 Fetching value of define "__znver1__" : (undefined) 00:02:43.968 Fetching value of define "__znver2__" : (undefined) 00:02:43.968 Fetching value of define "__znver3__" : (undefined) 00:02:43.968 Fetching value of define "__znver4__" : (undefined) 00:02:43.968 Library asan found: YES 00:02:43.968 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:43.969 Message: lib/log: Defining dependency "log" 00:02:43.969 Message: lib/kvargs: Defining dependency "kvargs" 00:02:43.969 Message: lib/telemetry: Defining dependency "telemetry" 00:02:43.969 Library rt found: YES 00:02:43.969 Checking for function "getentropy" : NO 00:02:43.969 Message: lib/eal: Defining dependency "eal" 00:02:43.969 Message: lib/ring: Defining dependency "ring" 00:02:43.969 Message: lib/rcu: Defining dependency "rcu" 00:02:43.969 Message: lib/mempool: Defining dependency "mempool" 00:02:43.969 Message: lib/mbuf: Defining dependency "mbuf" 00:02:43.969 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:43.969 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:43.969 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:43.969 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:43.969 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:43.969 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:43.969 Compiler for C supports arguments -mpclmul: YES 00:02:43.969 Compiler for C supports arguments -maes: YES 00:02:43.969 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:43.969 Compiler for C supports arguments -mavx512bw: YES 00:02:43.969 Compiler for C supports arguments -mavx512dq: YES 00:02:43.969 Compiler for C supports arguments -mavx512vl: YES 00:02:43.969 Compiler for C supports arguments -mvpclmulqdq: YES 
00:02:43.969 Compiler for C supports arguments -mavx2: YES 00:02:43.969 Compiler for C supports arguments -mavx: YES 00:02:43.969 Message: lib/net: Defining dependency "net" 00:02:43.969 Message: lib/meter: Defining dependency "meter" 00:02:43.969 Message: lib/ethdev: Defining dependency "ethdev" 00:02:43.969 Message: lib/pci: Defining dependency "pci" 00:02:43.969 Message: lib/cmdline: Defining dependency "cmdline" 00:02:43.969 Message: lib/hash: Defining dependency "hash" 00:02:43.969 Message: lib/timer: Defining dependency "timer" 00:02:43.969 Message: lib/compressdev: Defining dependency "compressdev" 00:02:43.969 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:43.969 Message: lib/dmadev: Defining dependency "dmadev" 00:02:43.969 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:43.969 Message: lib/power: Defining dependency "power" 00:02:43.969 Message: lib/reorder: Defining dependency "reorder" 00:02:43.969 Message: lib/security: Defining dependency "security" 00:02:43.969 Has header "linux/userfaultfd.h" : YES 00:02:43.969 Has header "linux/vduse.h" : YES 00:02:43.969 Message: lib/vhost: Defining dependency "vhost" 00:02:43.969 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:43.969 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:43.969 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:43.969 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:43.969 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:43.969 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:43.969 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:43.969 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:43.969 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:43.969 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:43.969 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:43.969 Configuring doxy-api-html.conf using configuration 00:02:43.969 Configuring doxy-api-man.conf using configuration 00:02:43.969 Program mandb found: YES (/usr/bin/mandb) 00:02:43.969 Program sphinx-build found: NO 00:02:43.969 Configuring rte_build_config.h using configuration 00:02:43.969 Message: 00:02:43.969 ================= 00:02:43.969 Applications Enabled 00:02:43.969 ================= 00:02:43.969 00:02:43.969 apps: 00:02:43.969 00:02:43.969 00:02:43.969 Message: 00:02:43.969 ================= 00:02:43.969 Libraries Enabled 00:02:43.969 ================= 00:02:43.969 00:02:43.969 libs: 00:02:43.969 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:43.969 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:43.969 cryptodev, dmadev, power, reorder, security, vhost, 00:02:43.969 00:02:43.969 Message: 00:02:43.969 =============== 00:02:43.969 Drivers Enabled 00:02:43.969 =============== 00:02:43.969 00:02:43.969 common: 00:02:43.969 00:02:43.969 bus: 00:02:43.969 pci, vdev, 00:02:43.969 mempool: 00:02:43.969 ring, 00:02:43.969 dma: 00:02:43.969 00:02:43.969 net: 00:02:43.969 00:02:43.969 crypto: 00:02:43.969 00:02:43.969 compress: 00:02:43.969 00:02:43.969 vdpa: 00:02:43.969 00:02:43.969 00:02:43.969 Message: 00:02:43.969 ================= 00:02:43.969 Content Skipped 00:02:43.969 ================= 00:02:43.969 00:02:43.969 apps: 00:02:43.969 dumpcap: explicitly disabled via build config 00:02:43.969 graph: explicitly disabled via build config 00:02:43.969 pdump: explicitly disabled via build config 00:02:43.969 proc-info: explicitly disabled via build config 00:02:43.969 test-acl: explicitly disabled via build config 00:02:43.969 test-bbdev: explicitly disabled via build config 00:02:43.969 test-cmdline: explicitly disabled via build config 00:02:43.969 test-compress-perf: explicitly disabled via build config 00:02:43.969 test-crypto-perf: explicitly disabled via build 
config 00:02:43.969 test-dma-perf: explicitly disabled via build config 00:02:43.969 test-eventdev: explicitly disabled via build config 00:02:43.969 test-fib: explicitly disabled via build config 00:02:43.969 test-flow-perf: explicitly disabled via build config 00:02:43.969 test-gpudev: explicitly disabled via build config 00:02:43.969 test-mldev: explicitly disabled via build config 00:02:43.969 test-pipeline: explicitly disabled via build config 00:02:43.969 test-pmd: explicitly disabled via build config 00:02:43.969 test-regex: explicitly disabled via build config 00:02:43.969 test-sad: explicitly disabled via build config 00:02:43.969 test-security-perf: explicitly disabled via build config 00:02:43.969 00:02:43.969 libs: 00:02:43.969 argparse: explicitly disabled via build config 00:02:43.969 metrics: explicitly disabled via build config 00:02:43.969 acl: explicitly disabled via build config 00:02:43.969 bbdev: explicitly disabled via build config 00:02:43.969 bitratestats: explicitly disabled via build config 00:02:43.969 bpf: explicitly disabled via build config 00:02:43.969 cfgfile: explicitly disabled via build config 00:02:43.969 distributor: explicitly disabled via build config 00:02:43.969 efd: explicitly disabled via build config 00:02:43.969 eventdev: explicitly disabled via build config 00:02:43.969 dispatcher: explicitly disabled via build config 00:02:43.969 gpudev: explicitly disabled via build config 00:02:43.969 gro: explicitly disabled via build config 00:02:43.969 gso: explicitly disabled via build config 00:02:43.969 ip_frag: explicitly disabled via build config 00:02:43.969 jobstats: explicitly disabled via build config 00:02:43.969 latencystats: explicitly disabled via build config 00:02:43.969 lpm: explicitly disabled via build config 00:02:43.969 member: explicitly disabled via build config 00:02:43.969 pcapng: explicitly disabled via build config 00:02:43.969 rawdev: explicitly disabled via build config 00:02:43.969 regexdev: explicitly 
disabled via build config 00:02:43.969 mldev: explicitly disabled via build config 00:02:43.969 rib: explicitly disabled via build config 00:02:43.969 sched: explicitly disabled via build config 00:02:43.969 stack: explicitly disabled via build config 00:02:43.969 ipsec: explicitly disabled via build config 00:02:43.969 pdcp: explicitly disabled via build config 00:02:43.969 fib: explicitly disabled via build config 00:02:43.969 port: explicitly disabled via build config 00:02:43.969 pdump: explicitly disabled via build config 00:02:43.969 table: explicitly disabled via build config 00:02:43.969 pipeline: explicitly disabled via build config 00:02:43.969 graph: explicitly disabled via build config 00:02:43.969 node: explicitly disabled via build config 00:02:43.969 00:02:43.969 drivers: 00:02:43.969 common/cpt: not in enabled drivers build config 00:02:43.969 common/dpaax: not in enabled drivers build config 00:02:43.969 common/iavf: not in enabled drivers build config 00:02:43.969 common/idpf: not in enabled drivers build config 00:02:43.969 common/ionic: not in enabled drivers build config 00:02:43.969 common/mvep: not in enabled drivers build config 00:02:43.969 common/octeontx: not in enabled drivers build config 00:02:43.969 bus/auxiliary: not in enabled drivers build config 00:02:43.969 bus/cdx: not in enabled drivers build config 00:02:43.969 bus/dpaa: not in enabled drivers build config 00:02:43.969 bus/fslmc: not in enabled drivers build config 00:02:43.969 bus/ifpga: not in enabled drivers build config 00:02:43.969 bus/platform: not in enabled drivers build config 00:02:43.969 bus/uacce: not in enabled drivers build config 00:02:43.969 bus/vmbus: not in enabled drivers build config 00:02:43.969 common/cnxk: not in enabled drivers build config 00:02:43.969 common/mlx5: not in enabled drivers build config 00:02:43.969 common/nfp: not in enabled drivers build config 00:02:43.969 common/nitrox: not in enabled drivers build config 00:02:43.969 common/qat: not 
in enabled drivers build config 00:02:43.969 common/sfc_efx: not in enabled drivers build config 00:02:43.969 mempool/bucket: not in enabled drivers build config 00:02:43.969 mempool/cnxk: not in enabled drivers build config 00:02:43.969 mempool/dpaa: not in enabled drivers build config 00:02:43.969 mempool/dpaa2: not in enabled drivers build config 00:02:43.969 mempool/octeontx: not in enabled drivers build config 00:02:43.969 mempool/stack: not in enabled drivers build config 00:02:43.969 dma/cnxk: not in enabled drivers build config 00:02:43.969 dma/dpaa: not in enabled drivers build config 00:02:43.969 dma/dpaa2: not in enabled drivers build config 00:02:43.969 dma/hisilicon: not in enabled drivers build config 00:02:43.969 dma/idxd: not in enabled drivers build config 00:02:43.969 dma/ioat: not in enabled drivers build config 00:02:43.969 dma/skeleton: not in enabled drivers build config 00:02:43.969 net/af_packet: not in enabled drivers build config 00:02:43.969 net/af_xdp: not in enabled drivers build config 00:02:43.969 net/ark: not in enabled drivers build config 00:02:43.969 net/atlantic: not in enabled drivers build config 00:02:43.969 net/avp: not in enabled drivers build config 00:02:43.969 net/axgbe: not in enabled drivers build config 00:02:43.970 net/bnx2x: not in enabled drivers build config 00:02:43.970 net/bnxt: not in enabled drivers build config 00:02:43.970 net/bonding: not in enabled drivers build config 00:02:43.970 net/cnxk: not in enabled drivers build config 00:02:43.970 net/cpfl: not in enabled drivers build config 00:02:43.970 net/cxgbe: not in enabled drivers build config 00:02:43.970 net/dpaa: not in enabled drivers build config 00:02:43.970 net/dpaa2: not in enabled drivers build config 00:02:43.970 net/e1000: not in enabled drivers build config 00:02:43.970 net/ena: not in enabled drivers build config 00:02:43.970 net/enetc: not in enabled drivers build config 00:02:43.970 net/enetfec: not in enabled drivers build config 
00:02:43.970 net/enic: not in enabled drivers build config 00:02:43.970 net/failsafe: not in enabled drivers build config 00:02:43.970 net/fm10k: not in enabled drivers build config 00:02:43.970 net/gve: not in enabled drivers build config 00:02:43.970 net/hinic: not in enabled drivers build config 00:02:43.970 net/hns3: not in enabled drivers build config 00:02:43.970 net/i40e: not in enabled drivers build config 00:02:43.970 net/iavf: not in enabled drivers build config 00:02:43.970 net/ice: not in enabled drivers build config 00:02:43.970 net/idpf: not in enabled drivers build config 00:02:43.970 net/igc: not in enabled drivers build config 00:02:43.970 net/ionic: not in enabled drivers build config 00:02:43.970 net/ipn3ke: not in enabled drivers build config 00:02:43.970 net/ixgbe: not in enabled drivers build config 00:02:43.970 net/mana: not in enabled drivers build config 00:02:43.970 net/memif: not in enabled drivers build config 00:02:43.970 net/mlx4: not in enabled drivers build config 00:02:43.970 net/mlx5: not in enabled drivers build config 00:02:43.970 net/mvneta: not in enabled drivers build config 00:02:43.970 net/mvpp2: not in enabled drivers build config 00:02:43.970 net/netvsc: not in enabled drivers build config 00:02:43.970 net/nfb: not in enabled drivers build config 00:02:43.970 net/nfp: not in enabled drivers build config 00:02:43.970 net/ngbe: not in enabled drivers build config 00:02:43.970 net/null: not in enabled drivers build config 00:02:43.970 net/octeontx: not in enabled drivers build config 00:02:43.970 net/octeon_ep: not in enabled drivers build config 00:02:43.970 net/pcap: not in enabled drivers build config 00:02:43.970 net/pfe: not in enabled drivers build config 00:02:43.970 net/qede: not in enabled drivers build config 00:02:43.970 net/ring: not in enabled drivers build config 00:02:43.970 net/sfc: not in enabled drivers build config 00:02:43.970 net/softnic: not in enabled drivers build config 00:02:43.970 net/tap: not in 
enabled drivers build config 00:02:43.970 net/thunderx: not in enabled drivers build config 00:02:43.970 net/txgbe: not in enabled drivers build config 00:02:43.970 net/vdev_netvsc: not in enabled drivers build config 00:02:43.970 net/vhost: not in enabled drivers build config 00:02:43.970 net/virtio: not in enabled drivers build config 00:02:43.970 net/vmxnet3: not in enabled drivers build config 00:02:43.970 raw/*: missing internal dependency, "rawdev" 00:02:43.970 crypto/armv8: not in enabled drivers build config 00:02:43.970 crypto/bcmfs: not in enabled drivers build config 00:02:43.970 crypto/caam_jr: not in enabled drivers build config 00:02:43.970 crypto/ccp: not in enabled drivers build config 00:02:43.970 crypto/cnxk: not in enabled drivers build config 00:02:43.970 crypto/dpaa_sec: not in enabled drivers build config 00:02:43.970 crypto/dpaa2_sec: not in enabled drivers build config 00:02:43.970 crypto/ipsec_mb: not in enabled drivers build config 00:02:43.970 crypto/mlx5: not in enabled drivers build config 00:02:43.970 crypto/mvsam: not in enabled drivers build config 00:02:43.970 crypto/nitrox: not in enabled drivers build config 00:02:43.970 crypto/null: not in enabled drivers build config 00:02:43.970 crypto/octeontx: not in enabled drivers build config 00:02:43.970 crypto/openssl: not in enabled drivers build config 00:02:43.970 crypto/scheduler: not in enabled drivers build config 00:02:43.970 crypto/uadk: not in enabled drivers build config 00:02:43.970 crypto/virtio: not in enabled drivers build config 00:02:43.970 compress/isal: not in enabled drivers build config 00:02:43.970 compress/mlx5: not in enabled drivers build config 00:02:43.970 compress/nitrox: not in enabled drivers build config 00:02:43.970 compress/octeontx: not in enabled drivers build config 00:02:43.970 compress/zlib: not in enabled drivers build config 00:02:43.970 regex/*: missing internal dependency, "regexdev" 00:02:43.970 ml/*: missing internal dependency, "mldev" 
00:02:43.970 vdpa/ifc: not in enabled drivers build config 00:02:43.970 vdpa/mlx5: not in enabled drivers build config 00:02:43.970 vdpa/nfp: not in enabled drivers build config 00:02:43.970 vdpa/sfc: not in enabled drivers build config 00:02:43.970 event/*: missing internal dependency, "eventdev" 00:02:43.970 baseband/*: missing internal dependency, "bbdev" 00:02:43.970 gpu/*: missing internal dependency, "gpudev" 00:02:43.970 00:02:43.970 00:02:43.970 Build targets in project: 85 00:02:43.970 00:02:43.970 DPDK 24.03.0 00:02:43.970 00:02:43.970 User defined options 00:02:43.970 buildtype : debug 00:02:43.970 default_library : shared 00:02:43.970 libdir : lib 00:02:43.970 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:43.970 b_sanitize : address 00:02:43.970 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:43.970 c_link_args : 00:02:43.970 cpu_instruction_set: native 00:02:43.970 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:43.970 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:43.970 enable_docs : false 00:02:43.970 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:43.970 enable_kmods : false 00:02:43.970 max_lcores : 128 00:02:43.970 tests : false 00:02:43.970 00:02:43.970 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:43.970 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:43.970 [1/268] Compiling C object 
lib/librte_log.a.p/log_log_linux.c.o 00:02:43.970 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:43.970 [3/268] Linking static target lib/librte_log.a 00:02:43.970 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:43.970 [5/268] Linking static target lib/librte_kvargs.a 00:02:43.970 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:43.970 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:43.970 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:43.970 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:43.970 [10/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.970 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:43.970 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:43.970 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:43.970 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:43.970 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:43.970 [16/268] Linking static target lib/librte_telemetry.a 00:02:43.970 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:43.970 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:44.229 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.229 [20/268] Linking target lib/librte_log.so.24.1 00:02:44.488 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:44.488 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:44.488 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:44.488 [24/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:44.488 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:44.488 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:44.488 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:44.488 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:44.747 [29/268] Linking target lib/librte_kvargs.so.24.1 00:02:44.747 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:44.747 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:44.747 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:44.747 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.005 [34/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:45.006 [35/268] Linking target lib/librte_telemetry.so.24.1 00:02:45.006 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:45.268 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:45.268 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:45.268 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:45.268 [40/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:45.268 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:45.268 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:45.268 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:45.268 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:45.530 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 
00:02:45.530 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:45.530 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:45.530 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:45.788 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:45.788 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:46.046 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:46.046 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:46.046 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:46.046 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:46.046 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:46.046 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:46.304 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:46.304 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:46.304 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:46.563 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:46.563 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:46.563 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:46.563 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:46.563 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:46.563 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:46.563 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:46.821 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:47.079 [68/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:47.079 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:47.079 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:47.079 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:47.079 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:47.338 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:47.338 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:47.338 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:47.338 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:47.338 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:47.338 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:47.598 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:47.598 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:47.598 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:47.598 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:47.858 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:47.858 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:47.858 [85/268] Linking static target lib/librte_ring.a 00:02:48.117 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:48.117 [87/268] Linking static target lib/librte_eal.a 00:02:48.117 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:48.117 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:48.117 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:48.375 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 
00:02:48.375 [92/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.634 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:48.634 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:48.634 [95/268] Linking static target lib/librte_mempool.a 00:02:48.634 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:48.634 [97/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:48.634 [98/268] Linking static target lib/librte_rcu.a 00:02:48.634 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:48.634 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:48.893 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:48.893 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:48.893 [103/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:49.152 [104/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:49.152 [105/268] Linking static target lib/librte_meter.a 00:02:49.152 [106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:49.152 [107/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.152 [108/268] Linking static target lib/librte_mbuf.a 00:02:49.152 [109/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:49.152 [110/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:49.152 [111/268] Linking static target lib/librte_net.a 00:02:49.411 [112/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.411 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:49.411 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:49.744 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:49.744 
[116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.744 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.744 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:50.001 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:50.259 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:50.259 [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.259 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:50.518 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:50.518 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:50.777 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:50.777 [126/268] Linking static target lib/librte_pci.a 00:02:50.777 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:50.777 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:50.777 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:50.777 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:51.036 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:51.036 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:51.036 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:51.036 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:51.036 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:51.036 [136/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.036 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 
00:02:51.036 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:51.036 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:51.296 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:51.296 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:51.296 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:51.296 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:51.296 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:51.296 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:51.296 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:51.296 [147/268] Linking static target lib/librte_cmdline.a 00:02:51.557 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:51.816 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:51.816 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:51.816 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:51.816 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:51.816 [153/268] Linking static target lib/librte_timer.a 00:02:52.076 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:52.076 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:52.334 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:52.334 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:52.592 [158/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.592 [159/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 
00:02:52.592 [160/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:52.592 [161/268] Linking static target lib/librte_hash.a 00:02:52.850 [162/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:52.850 [163/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:52.850 [164/268] Linking static target lib/librte_dmadev.a 00:02:52.850 [165/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:52.850 [166/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:52.850 [167/268] Linking static target lib/librte_compressdev.a 00:02:52.850 [168/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.109 [169/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:53.109 [170/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:53.109 [171/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:53.109 [172/268] Linking static target lib/librte_ethdev.a 00:02:53.368 [173/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:53.368 [174/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:53.627 [175/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:53.627 [176/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:53.627 [177/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.627 [178/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.627 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:53.627 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:53.628 [181/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.886 
[182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:54.145 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:54.145 [184/268] Linking static target lib/librte_power.a 00:02:54.145 [185/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:54.145 [186/268] Linking static target lib/librte_cryptodev.a 00:02:54.145 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:54.145 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:54.145 [189/268] Linking static target lib/librte_reorder.a 00:02:54.403 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:54.660 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:54.660 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:54.660 [193/268] Linking static target lib/librte_security.a 00:02:54.928 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.928 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:55.185 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.443 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:55.443 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:55.443 [199/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.443 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:55.703 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:55.703 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:55.960 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:55.960 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 
00:02:55.960 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:56.219 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:56.219 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:56.219 [208/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:56.219 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:56.219 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:56.478 [211/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:56.478 [212/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:56.478 [213/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:56.478 [214/268] Linking static target drivers/librte_bus_pci.a 00:02:56.736 [215/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:56.736 [216/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.736 [217/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:56.736 [218/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:56.736 [219/268] Linking static target drivers/librte_bus_vdev.a 00:02:56.736 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:56.736 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:56.994 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:56.994 [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:56.995 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:56.995 [225/268] Linking static target drivers/librte_mempool_ring.a 
00:02:56.995 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.253 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.252 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:58.838 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.097 [230/268] Linking target lib/librte_eal.so.24.1 00:02:59.097 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:59.355 [232/268] Linking target lib/librte_ring.so.24.1 00:02:59.355 [233/268] Linking target lib/librte_pci.so.24.1 00:02:59.355 [234/268] Linking target lib/librte_timer.so.24.1 00:02:59.355 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:59.355 [236/268] Linking target lib/librte_dmadev.so.24.1 00:02:59.355 [237/268] Linking target lib/librte_meter.so.24.1 00:02:59.355 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:59.355 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:59.355 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:59.355 [241/268] Linking target lib/librte_mempool.so.24.1 00:02:59.355 [242/268] Linking target lib/librte_rcu.so.24.1 00:02:59.355 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:59.355 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:59.355 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:59.615 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:59.615 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:59.615 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:59.615 [249/268] Linking target 
lib/librte_mbuf.so.24.1 00:02:59.875 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:59.875 [251/268] Linking target lib/librte_compressdev.so.24.1 00:02:59.875 [252/268] Linking target lib/librte_reorder.so.24.1 00:02:59.875 [253/268] Linking target lib/librte_net.so.24.1 00:02:59.875 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:02:59.875 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:59.875 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:00.133 [257/268] Linking target lib/librte_cmdline.so.24.1 00:03:00.133 [258/268] Linking target lib/librte_hash.so.24.1 00:03:00.133 [259/268] Linking target lib/librte_security.so.24.1 00:03:00.133 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:02.035 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.035 [262/268] Linking target lib/librte_ethdev.so.24.1 00:03:02.292 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:02.292 [264/268] Linking target lib/librte_power.so.24.1 00:03:02.549 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:02.549 [266/268] Linking static target lib/librte_vhost.a 00:03:05.080 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.080 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:05.080 INFO: autodetecting backend as ninja 00:03:05.080 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:23.159 CC lib/ut_mock/mock.o 00:03:23.159 CC lib/log/log_flags.o 00:03:23.159 CC lib/log/log_deprecated.o 00:03:23.159 CC lib/log/log.o 00:03:23.159 CC lib/ut/ut.o 00:03:23.418 LIB libspdk_ut_mock.a 00:03:23.418 LIB libspdk_log.a 00:03:23.418 LIB libspdk_ut.a 
00:03:23.418 SO libspdk_ut_mock.so.6.0 00:03:23.418 SO libspdk_ut.so.2.0 00:03:23.418 SO libspdk_log.so.7.1 00:03:23.418 SYMLINK libspdk_ut_mock.so 00:03:23.418 SYMLINK libspdk_ut.so 00:03:23.418 SYMLINK libspdk_log.so 00:03:23.678 CC lib/ioat/ioat.o 00:03:23.678 CC lib/util/base64.o 00:03:23.678 CC lib/util/cpuset.o 00:03:23.678 CC lib/util/crc32.o 00:03:23.678 CC lib/util/crc16.o 00:03:23.678 CC lib/util/crc32c.o 00:03:23.678 CC lib/util/bit_array.o 00:03:23.678 CXX lib/trace_parser/trace.o 00:03:23.678 CC lib/dma/dma.o 00:03:23.937 CC lib/vfio_user/host/vfio_user_pci.o 00:03:23.937 CC lib/util/crc32_ieee.o 00:03:23.937 CC lib/util/crc64.o 00:03:23.937 CC lib/vfio_user/host/vfio_user.o 00:03:23.937 CC lib/util/dif.o 00:03:23.937 CC lib/util/fd.o 00:03:23.937 CC lib/util/fd_group.o 00:03:23.937 LIB libspdk_dma.a 00:03:23.937 CC lib/util/file.o 00:03:23.937 CC lib/util/hexlify.o 00:03:23.937 SO libspdk_dma.so.5.0 00:03:23.937 LIB libspdk_ioat.a 00:03:23.937 SO libspdk_ioat.so.7.0 00:03:24.197 SYMLINK libspdk_dma.so 00:03:24.197 CC lib/util/iov.o 00:03:24.197 CC lib/util/math.o 00:03:24.197 SYMLINK libspdk_ioat.so 00:03:24.197 CC lib/util/net.o 00:03:24.197 CC lib/util/pipe.o 00:03:24.197 LIB libspdk_vfio_user.a 00:03:24.197 CC lib/util/strerror_tls.o 00:03:24.197 SO libspdk_vfio_user.so.5.0 00:03:24.197 CC lib/util/string.o 00:03:24.197 SYMLINK libspdk_vfio_user.so 00:03:24.197 CC lib/util/uuid.o 00:03:24.197 CC lib/util/xor.o 00:03:24.197 CC lib/util/zipf.o 00:03:24.197 CC lib/util/md5.o 00:03:24.766 LIB libspdk_util.a 00:03:24.766 SO libspdk_util.so.10.1 00:03:24.766 LIB libspdk_trace_parser.a 00:03:24.766 SO libspdk_trace_parser.so.6.0 00:03:24.766 SYMLINK libspdk_util.so 00:03:25.026 SYMLINK libspdk_trace_parser.so 00:03:25.026 CC lib/conf/conf.o 00:03:25.026 CC lib/json/json_util.o 00:03:25.026 CC lib/json/json_parse.o 00:03:25.026 CC lib/json/json_write.o 00:03:25.026 CC lib/idxd/idxd.o 00:03:25.026 CC lib/idxd/idxd_user.o 00:03:25.026 CC 
lib/idxd/idxd_kernel.o 00:03:25.026 CC lib/env_dpdk/env.o 00:03:25.026 CC lib/rdma_utils/rdma_utils.o 00:03:25.026 CC lib/vmd/vmd.o 00:03:25.285 CC lib/vmd/led.o 00:03:25.285 LIB libspdk_conf.a 00:03:25.285 CC lib/env_dpdk/memory.o 00:03:25.285 SO libspdk_conf.so.6.0 00:03:25.285 CC lib/env_dpdk/pci.o 00:03:25.285 CC lib/env_dpdk/init.o 00:03:25.285 SYMLINK libspdk_conf.so 00:03:25.285 CC lib/env_dpdk/threads.o 00:03:25.285 LIB libspdk_json.a 00:03:25.546 LIB libspdk_rdma_utils.a 00:03:25.546 SO libspdk_rdma_utils.so.1.0 00:03:25.546 SO libspdk_json.so.6.0 00:03:25.546 CC lib/env_dpdk/pci_ioat.o 00:03:25.546 SYMLINK libspdk_rdma_utils.so 00:03:25.546 SYMLINK libspdk_json.so 00:03:25.546 CC lib/env_dpdk/pci_virtio.o 00:03:25.546 CC lib/env_dpdk/pci_vmd.o 00:03:25.546 CC lib/env_dpdk/pci_idxd.o 00:03:25.546 CC lib/env_dpdk/pci_event.o 00:03:25.805 CC lib/rdma_provider/common.o 00:03:25.805 CC lib/env_dpdk/sigbus_handler.o 00:03:25.805 CC lib/env_dpdk/pci_dpdk.o 00:03:25.805 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:25.805 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:25.805 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:25.805 LIB libspdk_idxd.a 00:03:25.805 SO libspdk_idxd.so.12.1 00:03:26.065 LIB libspdk_vmd.a 00:03:26.065 SYMLINK libspdk_idxd.so 00:03:26.065 SO libspdk_vmd.so.6.0 00:03:26.065 SYMLINK libspdk_vmd.so 00:03:26.065 LIB libspdk_rdma_provider.a 00:03:26.065 CC lib/jsonrpc/jsonrpc_server.o 00:03:26.065 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:26.065 CC lib/jsonrpc/jsonrpc_client.o 00:03:26.065 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:26.065 SO libspdk_rdma_provider.so.7.0 00:03:26.065 SYMLINK libspdk_rdma_provider.so 00:03:26.324 LIB libspdk_jsonrpc.a 00:03:26.702 SO libspdk_jsonrpc.so.6.0 00:03:26.702 SYMLINK libspdk_jsonrpc.so 00:03:26.961 CC lib/rpc/rpc.o 00:03:26.961 LIB libspdk_env_dpdk.a 00:03:26.961 SO libspdk_env_dpdk.so.15.1 00:03:27.221 LIB libspdk_rpc.a 00:03:27.221 SYMLINK libspdk_env_dpdk.so 00:03:27.221 SO libspdk_rpc.so.6.0 00:03:27.221 SYMLINK 
libspdk_rpc.so 00:03:27.788 CC lib/notify/notify.o 00:03:27.788 CC lib/notify/notify_rpc.o 00:03:27.788 CC lib/trace/trace.o 00:03:27.788 CC lib/trace/trace_flags.o 00:03:27.788 CC lib/trace/trace_rpc.o 00:03:27.788 CC lib/keyring/keyring.o 00:03:27.788 CC lib/keyring/keyring_rpc.o 00:03:27.788 LIB libspdk_notify.a 00:03:27.788 SO libspdk_notify.so.6.0 00:03:28.047 SYMLINK libspdk_notify.so 00:03:28.047 LIB libspdk_keyring.a 00:03:28.047 LIB libspdk_trace.a 00:03:28.047 SO libspdk_keyring.so.2.0 00:03:28.047 SO libspdk_trace.so.11.0 00:03:28.047 SYMLINK libspdk_keyring.so 00:03:28.047 SYMLINK libspdk_trace.so 00:03:28.617 CC lib/sock/sock.o 00:03:28.617 CC lib/sock/sock_rpc.o 00:03:28.617 CC lib/thread/thread.o 00:03:28.617 CC lib/thread/iobuf.o 00:03:28.876 LIB libspdk_sock.a 00:03:28.876 SO libspdk_sock.so.10.0 00:03:29.135 SYMLINK libspdk_sock.so 00:03:29.395 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:29.395 CC lib/nvme/nvme_ctrlr.o 00:03:29.395 CC lib/nvme/nvme_fabric.o 00:03:29.395 CC lib/nvme/nvme_ns_cmd.o 00:03:29.395 CC lib/nvme/nvme_ns.o 00:03:29.395 CC lib/nvme/nvme_pcie_common.o 00:03:29.395 CC lib/nvme/nvme_qpair.o 00:03:29.395 CC lib/nvme/nvme_pcie.o 00:03:29.395 CC lib/nvme/nvme.o 00:03:30.335 CC lib/nvme/nvme_quirks.o 00:03:30.335 CC lib/nvme/nvme_transport.o 00:03:30.335 LIB libspdk_thread.a 00:03:30.335 SO libspdk_thread.so.11.0 00:03:30.335 CC lib/nvme/nvme_discovery.o 00:03:30.335 SYMLINK libspdk_thread.so 00:03:30.335 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:30.594 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:30.594 CC lib/nvme/nvme_tcp.o 00:03:30.594 CC lib/nvme/nvme_opal.o 00:03:30.594 CC lib/accel/accel.o 00:03:30.594 CC lib/nvme/nvme_io_msg.o 00:03:30.853 CC lib/blob/blobstore.o 00:03:30.853 CC lib/blob/request.o 00:03:30.853 CC lib/init/json_config.o 00:03:30.853 CC lib/init/subsystem.o 00:03:31.112 CC lib/init/subsystem_rpc.o 00:03:31.112 CC lib/init/rpc.o 00:03:31.112 CC lib/blob/zeroes.o 00:03:31.112 CC lib/blob/blob_bs_dev.o 00:03:31.112 CC 
lib/accel/accel_rpc.o 00:03:31.371 LIB libspdk_init.a 00:03:31.371 SO libspdk_init.so.6.0 00:03:31.371 SYMLINK libspdk_init.so 00:03:31.371 CC lib/nvme/nvme_poll_group.o 00:03:31.371 CC lib/accel/accel_sw.o 00:03:31.371 CC lib/virtio/virtio.o 00:03:31.371 CC lib/virtio/virtio_vhost_user.o 00:03:31.631 CC lib/fsdev/fsdev.o 00:03:31.631 CC lib/event/app.o 00:03:31.631 CC lib/fsdev/fsdev_io.o 00:03:31.890 LIB libspdk_accel.a 00:03:31.890 CC lib/virtio/virtio_vfio_user.o 00:03:31.890 SO libspdk_accel.so.16.0 00:03:31.890 CC lib/virtio/virtio_pci.o 00:03:31.890 SYMLINK libspdk_accel.so 00:03:31.890 CC lib/nvme/nvme_zns.o 00:03:32.150 CC lib/fsdev/fsdev_rpc.o 00:03:32.150 CC lib/event/reactor.o 00:03:32.150 CC lib/event/log_rpc.o 00:03:32.150 CC lib/nvme/nvme_stubs.o 00:03:32.150 CC lib/nvme/nvme_auth.o 00:03:32.150 CC lib/event/app_rpc.o 00:03:32.150 LIB libspdk_virtio.a 00:03:32.150 SO libspdk_virtio.so.7.0 00:03:32.150 LIB libspdk_fsdev.a 00:03:32.150 CC lib/bdev/bdev.o 00:03:32.411 SO libspdk_fsdev.so.2.0 00:03:32.411 CC lib/bdev/bdev_rpc.o 00:03:32.411 SYMLINK libspdk_virtio.so 00:03:32.411 CC lib/bdev/bdev_zone.o 00:03:32.411 SYMLINK libspdk_fsdev.so 00:03:32.411 CC lib/nvme/nvme_cuse.o 00:03:32.411 CC lib/event/scheduler_static.o 00:03:32.411 CC lib/bdev/part.o 00:03:32.671 CC lib/bdev/scsi_nvme.o 00:03:32.671 LIB libspdk_event.a 00:03:32.671 CC lib/nvme/nvme_rdma.o 00:03:32.671 SO libspdk_event.so.14.0 00:03:32.671 SYMLINK libspdk_event.so 00:03:32.671 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:33.609 LIB libspdk_fuse_dispatcher.a 00:03:33.609 SO libspdk_fuse_dispatcher.so.1.0 00:03:33.609 SYMLINK libspdk_fuse_dispatcher.so 00:03:34.179 LIB libspdk_nvme.a 00:03:34.179 SO libspdk_nvme.so.15.0 00:03:34.439 SYMLINK libspdk_nvme.so 00:03:34.699 LIB libspdk_blob.a 00:03:34.699 SO libspdk_blob.so.11.0 00:03:34.958 SYMLINK libspdk_blob.so 00:03:35.217 CC lib/lvol/lvol.o 00:03:35.217 CC lib/blobfs/tree.o 00:03:35.217 CC lib/blobfs/blobfs.o 00:03:35.477 LIB 
libspdk_bdev.a 00:03:35.477 SO libspdk_bdev.so.17.0 00:03:35.736 SYMLINK libspdk_bdev.so 00:03:35.993 CC lib/nbd/nbd_rpc.o 00:03:35.993 CC lib/nbd/nbd.o 00:03:35.993 CC lib/ftl/ftl_core.o 00:03:35.993 CC lib/ftl/ftl_init.o 00:03:35.993 CC lib/ftl/ftl_layout.o 00:03:35.993 CC lib/ublk/ublk.o 00:03:35.993 CC lib/nvmf/ctrlr.o 00:03:35.993 CC lib/scsi/dev.o 00:03:35.993 CC lib/ublk/ublk_rpc.o 00:03:36.251 CC lib/ftl/ftl_debug.o 00:03:36.251 CC lib/scsi/lun.o 00:03:36.251 LIB libspdk_blobfs.a 00:03:36.251 CC lib/scsi/port.o 00:03:36.251 CC lib/nvmf/ctrlr_discovery.o 00:03:36.251 LIB libspdk_lvol.a 00:03:36.251 SO libspdk_blobfs.so.10.0 00:03:36.251 SO libspdk_lvol.so.10.0 00:03:36.251 CC lib/nvmf/ctrlr_bdev.o 00:03:36.251 CC lib/ftl/ftl_io.o 00:03:36.510 SYMLINK libspdk_lvol.so 00:03:36.510 CC lib/scsi/scsi.o 00:03:36.510 SYMLINK libspdk_blobfs.so 00:03:36.510 LIB libspdk_nbd.a 00:03:36.510 CC lib/nvmf/subsystem.o 00:03:36.510 CC lib/scsi/scsi_bdev.o 00:03:36.510 SO libspdk_nbd.so.7.0 00:03:36.510 SYMLINK libspdk_nbd.so 00:03:36.510 CC lib/ftl/ftl_sb.o 00:03:36.510 CC lib/ftl/ftl_l2p.o 00:03:36.510 CC lib/ftl/ftl_l2p_flat.o 00:03:36.769 CC lib/ftl/ftl_nv_cache.o 00:03:36.769 LIB libspdk_ublk.a 00:03:36.769 SO libspdk_ublk.so.3.0 00:03:36.769 CC lib/ftl/ftl_band.o 00:03:36.769 CC lib/scsi/scsi_pr.o 00:03:36.769 CC lib/ftl/ftl_band_ops.o 00:03:36.769 SYMLINK libspdk_ublk.so 00:03:36.769 CC lib/scsi/scsi_rpc.o 00:03:36.769 CC lib/nvmf/nvmf.o 00:03:37.028 CC lib/scsi/task.o 00:03:37.028 CC lib/nvmf/nvmf_rpc.o 00:03:37.028 CC lib/ftl/ftl_writer.o 00:03:37.028 CC lib/ftl/ftl_rq.o 00:03:37.028 CC lib/nvmf/transport.o 00:03:37.028 CC lib/ftl/ftl_reloc.o 00:03:37.287 LIB libspdk_scsi.a 00:03:37.287 SO libspdk_scsi.so.9.0 00:03:37.287 CC lib/ftl/ftl_l2p_cache.o 00:03:37.287 SYMLINK libspdk_scsi.so 00:03:37.287 CC lib/ftl/ftl_p2l.o 00:03:37.560 CC lib/iscsi/conn.o 00:03:37.560 CC lib/vhost/vhost.o 00:03:37.820 CC lib/vhost/vhost_rpc.o 00:03:37.820 CC lib/vhost/vhost_scsi.o 
00:03:37.820 CC lib/nvmf/tcp.o 00:03:37.820 CC lib/vhost/vhost_blk.o 00:03:38.079 CC lib/ftl/ftl_p2l_log.o 00:03:38.079 CC lib/iscsi/init_grp.o 00:03:38.079 CC lib/iscsi/iscsi.o 00:03:38.079 CC lib/iscsi/param.o 00:03:38.338 CC lib/nvmf/stubs.o 00:03:38.338 CC lib/vhost/rte_vhost_user.o 00:03:38.338 CC lib/ftl/mngt/ftl_mngt.o 00:03:38.597 CC lib/iscsi/portal_grp.o 00:03:38.597 CC lib/nvmf/mdns_server.o 00:03:38.597 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:38.597 CC lib/iscsi/tgt_node.o 00:03:38.856 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:38.856 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:38.856 CC lib/iscsi/iscsi_subsystem.o 00:03:38.856 CC lib/nvmf/rdma.o 00:03:38.856 CC lib/nvmf/auth.o 00:03:38.856 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:38.856 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:38.856 CC lib/iscsi/iscsi_rpc.o 00:03:39.115 CC lib/iscsi/task.o 00:03:39.373 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:39.373 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:39.373 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:39.373 LIB libspdk_vhost.a 00:03:39.373 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:39.373 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:39.373 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:39.373 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:39.373 SO libspdk_vhost.so.8.0 00:03:39.633 SYMLINK libspdk_vhost.so 00:03:39.633 CC lib/ftl/utils/ftl_conf.o 00:03:39.633 CC lib/ftl/utils/ftl_md.o 00:03:39.633 CC lib/ftl/utils/ftl_mempool.o 00:03:39.633 CC lib/ftl/utils/ftl_bitmap.o 00:03:39.633 CC lib/ftl/utils/ftl_property.o 00:03:39.892 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:39.892 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:39.892 LIB libspdk_iscsi.a 00:03:39.892 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:39.892 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:39.892 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:39.892 SO libspdk_iscsi.so.8.0 00:03:39.892 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:39.892 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:40.151 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:40.151 CC 
lib/ftl/upgrade/ftl_sb_v5.o 00:03:40.151 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:40.151 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:40.151 SYMLINK libspdk_iscsi.so 00:03:40.151 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:40.151 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:40.151 CC lib/ftl/base/ftl_base_dev.o 00:03:40.151 CC lib/ftl/base/ftl_base_bdev.o 00:03:40.151 CC lib/ftl/ftl_trace.o 00:03:40.410 LIB libspdk_ftl.a 00:03:40.681 SO libspdk_ftl.so.9.0 00:03:40.943 SYMLINK libspdk_ftl.so 00:03:41.511 LIB libspdk_nvmf.a 00:03:41.778 SO libspdk_nvmf.so.20.0 00:03:42.062 SYMLINK libspdk_nvmf.so 00:03:42.322 CC module/env_dpdk/env_dpdk_rpc.o 00:03:42.322 CC module/scheduler/gscheduler/gscheduler.o 00:03:42.323 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:42.323 CC module/fsdev/aio/fsdev_aio.o 00:03:42.323 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:42.323 CC module/accel/error/accel_error.o 00:03:42.323 CC module/keyring/file/keyring.o 00:03:42.323 CC module/blob/bdev/blob_bdev.o 00:03:42.323 CC module/sock/posix/posix.o 00:03:42.323 CC module/accel/ioat/accel_ioat.o 00:03:42.581 LIB libspdk_env_dpdk_rpc.a 00:03:42.581 SO libspdk_env_dpdk_rpc.so.6.0 00:03:42.581 LIB libspdk_scheduler_gscheduler.a 00:03:42.581 SYMLINK libspdk_env_dpdk_rpc.so 00:03:42.581 CC module/keyring/file/keyring_rpc.o 00:03:42.581 CC module/accel/error/accel_error_rpc.o 00:03:42.581 SO libspdk_scheduler_gscheduler.so.4.0 00:03:42.581 LIB libspdk_scheduler_dpdk_governor.a 00:03:42.581 LIB libspdk_scheduler_dynamic.a 00:03:42.581 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:42.581 CC module/accel/ioat/accel_ioat_rpc.o 00:03:42.581 SO libspdk_scheduler_dynamic.so.4.0 00:03:42.581 SYMLINK libspdk_scheduler_gscheduler.so 00:03:42.581 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:42.841 SYMLINK libspdk_scheduler_dynamic.so 00:03:42.841 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:42.841 LIB libspdk_keyring_file.a 00:03:42.841 LIB libspdk_accel_error.a 00:03:42.841 SO 
libspdk_keyring_file.so.2.0 00:03:42.841 SO libspdk_accel_error.so.2.0 00:03:42.841 LIB libspdk_blob_bdev.a 00:03:42.841 LIB libspdk_accel_ioat.a 00:03:42.841 SO libspdk_blob_bdev.so.11.0 00:03:42.841 SO libspdk_accel_ioat.so.6.0 00:03:42.841 CC module/keyring/linux/keyring.o 00:03:42.841 SYMLINK libspdk_accel_error.so 00:03:42.841 SYMLINK libspdk_keyring_file.so 00:03:42.841 CC module/accel/dsa/accel_dsa.o 00:03:42.841 CC module/accel/dsa/accel_dsa_rpc.o 00:03:42.841 CC module/fsdev/aio/linux_aio_mgr.o 00:03:42.841 CC module/accel/iaa/accel_iaa.o 00:03:42.841 SYMLINK libspdk_blob_bdev.so 00:03:42.841 CC module/keyring/linux/keyring_rpc.o 00:03:42.841 SYMLINK libspdk_accel_ioat.so 00:03:42.841 CC module/accel/iaa/accel_iaa_rpc.o 00:03:43.100 LIB libspdk_keyring_linux.a 00:03:43.100 LIB libspdk_accel_iaa.a 00:03:43.100 SO libspdk_keyring_linux.so.1.0 00:03:43.100 SO libspdk_accel_iaa.so.3.0 00:03:43.100 CC module/bdev/delay/vbdev_delay.o 00:03:43.100 LIB libspdk_accel_dsa.a 00:03:43.100 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:43.100 CC module/bdev/error/vbdev_error.o 00:03:43.359 SYMLINK libspdk_keyring_linux.so 00:03:43.359 CC module/bdev/gpt/gpt.o 00:03:43.359 SYMLINK libspdk_accel_iaa.so 00:03:43.359 SO libspdk_accel_dsa.so.5.0 00:03:43.359 CC module/bdev/error/vbdev_error_rpc.o 00:03:43.359 CC module/bdev/lvol/vbdev_lvol.o 00:03:43.359 LIB libspdk_fsdev_aio.a 00:03:43.359 SYMLINK libspdk_accel_dsa.so 00:03:43.359 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:43.359 SO libspdk_fsdev_aio.so.1.0 00:03:43.359 CC module/bdev/gpt/vbdev_gpt.o 00:03:43.617 LIB libspdk_sock_posix.a 00:03:43.617 LIB libspdk_bdev_error.a 00:03:43.617 CC module/bdev/malloc/bdev_malloc.o 00:03:43.617 SYMLINK libspdk_fsdev_aio.so 00:03:43.617 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:43.617 SO libspdk_sock_posix.so.6.0 00:03:43.617 SO libspdk_bdev_error.so.6.0 00:03:43.617 LIB libspdk_bdev_delay.a 00:03:43.617 SO libspdk_bdev_delay.so.6.0 00:03:43.617 CC module/bdev/null/bdev_null.o 
00:03:43.617 SYMLINK libspdk_sock_posix.so 00:03:43.617 SYMLINK libspdk_bdev_error.so 00:03:43.617 CC module/bdev/nvme/bdev_nvme.o 00:03:43.617 CC module/bdev/null/bdev_null_rpc.o 00:03:43.617 SYMLINK libspdk_bdev_delay.so 00:03:43.876 LIB libspdk_bdev_gpt.a 00:03:43.876 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:43.876 SO libspdk_bdev_gpt.so.6.0 00:03:43.876 SYMLINK libspdk_bdev_gpt.so 00:03:43.876 CC module/bdev/passthru/vbdev_passthru.o 00:03:43.877 CC module/bdev/nvme/nvme_rpc.o 00:03:43.877 CC module/bdev/nvme/bdev_mdns_client.o 00:03:43.877 LIB libspdk_bdev_lvol.a 00:03:43.877 CC module/blobfs/bdev/blobfs_bdev.o 00:03:44.135 SO libspdk_bdev_lvol.so.6.0 00:03:44.135 LIB libspdk_bdev_malloc.a 00:03:44.135 SO libspdk_bdev_malloc.so.6.0 00:03:44.135 CC module/bdev/raid/bdev_raid.o 00:03:44.135 LIB libspdk_bdev_null.a 00:03:44.135 SYMLINK libspdk_bdev_lvol.so 00:03:44.135 CC module/bdev/raid/bdev_raid_rpc.o 00:03:44.135 SO libspdk_bdev_null.so.6.0 00:03:44.135 SYMLINK libspdk_bdev_malloc.so 00:03:44.135 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:44.135 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:44.135 SYMLINK libspdk_bdev_null.so 00:03:44.135 CC module/bdev/nvme/vbdev_opal.o 00:03:44.135 CC module/bdev/raid/bdev_raid_sb.o 00:03:44.393 CC module/bdev/raid/raid0.o 00:03:44.393 LIB libspdk_bdev_passthru.a 00:03:44.393 LIB libspdk_blobfs_bdev.a 00:03:44.393 CC module/bdev/split/vbdev_split.o 00:03:44.393 SO libspdk_bdev_passthru.so.6.0 00:03:44.393 SO libspdk_blobfs_bdev.so.6.0 00:03:44.393 CC module/bdev/split/vbdev_split_rpc.o 00:03:44.393 SYMLINK libspdk_bdev_passthru.so 00:03:44.393 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:44.393 SYMLINK libspdk_blobfs_bdev.so 00:03:44.716 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:44.716 CC module/bdev/raid/raid1.o 00:03:44.716 CC module/bdev/raid/concat.o 00:03:44.716 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:44.716 LIB libspdk_bdev_split.a 00:03:44.716 SO libspdk_bdev_split.so.6.0 00:03:44.716 CC 
module/bdev/raid/raid5f.o 00:03:44.716 SYMLINK libspdk_bdev_split.so 00:03:44.716 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:44.716 CC module/bdev/aio/bdev_aio.o 00:03:44.716 CC module/bdev/aio/bdev_aio_rpc.o 00:03:44.976 CC module/bdev/ftl/bdev_ftl.o 00:03:44.976 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:44.976 LIB libspdk_bdev_zone_block.a 00:03:44.976 SO libspdk_bdev_zone_block.so.6.0 00:03:44.976 CC module/bdev/iscsi/bdev_iscsi.o 00:03:44.976 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:44.976 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:44.976 SYMLINK libspdk_bdev_zone_block.so 00:03:44.976 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:45.235 LIB libspdk_bdev_aio.a 00:03:45.235 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:45.235 SO libspdk_bdev_aio.so.6.0 00:03:45.235 LIB libspdk_bdev_ftl.a 00:03:45.235 SYMLINK libspdk_bdev_aio.so 00:03:45.235 SO libspdk_bdev_ftl.so.6.0 00:03:45.235 SYMLINK libspdk_bdev_ftl.so 00:03:45.494 LIB libspdk_bdev_raid.a 00:03:45.494 SO libspdk_bdev_raid.so.6.0 00:03:45.494 LIB libspdk_bdev_iscsi.a 00:03:45.494 SO libspdk_bdev_iscsi.so.6.0 00:03:45.494 SYMLINK libspdk_bdev_raid.so 00:03:45.494 SYMLINK libspdk_bdev_iscsi.so 00:03:45.753 LIB libspdk_bdev_virtio.a 00:03:45.753 SO libspdk_bdev_virtio.so.6.0 00:03:45.753 SYMLINK libspdk_bdev_virtio.so 00:03:47.131 LIB libspdk_bdev_nvme.a 00:03:47.131 SO libspdk_bdev_nvme.so.7.1 00:03:47.131 SYMLINK libspdk_bdev_nvme.so 00:03:47.696 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:47.696 CC module/event/subsystems/iobuf/iobuf.o 00:03:47.696 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:47.696 CC module/event/subsystems/sock/sock.o 00:03:47.696 CC module/event/subsystems/keyring/keyring.o 00:03:47.696 CC module/event/subsystems/fsdev/fsdev.o 00:03:47.696 CC module/event/subsystems/vmd/vmd.o 00:03:47.696 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:47.696 CC module/event/subsystems/scheduler/scheduler.o 00:03:47.955 LIB libspdk_event_sock.a 00:03:47.955 LIB 
libspdk_event_vmd.a 00:03:47.955 LIB libspdk_event_fsdev.a 00:03:47.955 LIB libspdk_event_vhost_blk.a 00:03:47.955 SO libspdk_event_sock.so.5.0 00:03:47.955 SO libspdk_event_fsdev.so.1.0 00:03:47.955 LIB libspdk_event_keyring.a 00:03:47.955 SO libspdk_event_vhost_blk.so.3.0 00:03:47.955 SO libspdk_event_vmd.so.6.0 00:03:47.955 SO libspdk_event_keyring.so.1.0 00:03:47.955 LIB libspdk_event_iobuf.a 00:03:47.955 LIB libspdk_event_scheduler.a 00:03:47.955 SYMLINK libspdk_event_sock.so 00:03:47.955 SYMLINK libspdk_event_fsdev.so 00:03:47.955 SO libspdk_event_scheduler.so.4.0 00:03:47.955 SYMLINK libspdk_event_vhost_blk.so 00:03:47.955 SO libspdk_event_iobuf.so.3.0 00:03:47.955 SYMLINK libspdk_event_keyring.so 00:03:48.218 SYMLINK libspdk_event_vmd.so 00:03:48.218 SYMLINK libspdk_event_scheduler.so 00:03:48.218 SYMLINK libspdk_event_iobuf.so 00:03:48.477 CC module/event/subsystems/accel/accel.o 00:03:48.737 LIB libspdk_event_accel.a 00:03:48.737 SO libspdk_event_accel.so.6.0 00:03:48.737 SYMLINK libspdk_event_accel.so 00:03:48.998 CC module/event/subsystems/bdev/bdev.o 00:03:49.257 LIB libspdk_event_bdev.a 00:03:49.257 SO libspdk_event_bdev.so.6.0 00:03:49.516 SYMLINK libspdk_event_bdev.so 00:03:49.776 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:49.776 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:49.776 CC module/event/subsystems/ublk/ublk.o 00:03:49.776 CC module/event/subsystems/nbd/nbd.o 00:03:49.776 CC module/event/subsystems/scsi/scsi.o 00:03:49.776 LIB libspdk_event_ublk.a 00:03:49.776 LIB libspdk_event_nbd.a 00:03:49.776 SO libspdk_event_ublk.so.3.0 00:03:49.776 SO libspdk_event_nbd.so.6.0 00:03:50.035 LIB libspdk_event_nvmf.a 00:03:50.035 SYMLINK libspdk_event_ublk.so 00:03:50.035 SYMLINK libspdk_event_nbd.so 00:03:50.035 LIB libspdk_event_scsi.a 00:03:50.035 SO libspdk_event_nvmf.so.6.0 00:03:50.035 SO libspdk_event_scsi.so.6.0 00:03:50.035 SYMLINK libspdk_event_scsi.so 00:03:50.035 SYMLINK libspdk_event_nvmf.so 00:03:50.294 CC 
module/event/subsystems/iscsi/iscsi.o 00:03:50.294 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:50.553 LIB libspdk_event_vhost_scsi.a 00:03:50.553 LIB libspdk_event_iscsi.a 00:03:50.553 SO libspdk_event_vhost_scsi.so.3.0 00:03:50.553 SO libspdk_event_iscsi.so.6.0 00:03:50.553 SYMLINK libspdk_event_vhost_scsi.so 00:03:50.553 SYMLINK libspdk_event_iscsi.so 00:03:50.812 SO libspdk.so.6.0 00:03:50.812 SYMLINK libspdk.so 00:03:51.071 CC app/spdk_nvme_perf/perf.o 00:03:51.071 CC app/spdk_nvme_identify/identify.o 00:03:51.071 CC app/spdk_lspci/spdk_lspci.o 00:03:51.071 CXX app/trace/trace.o 00:03:51.071 CC app/trace_record/trace_record.o 00:03:51.071 CC app/nvmf_tgt/nvmf_main.o 00:03:51.071 CC app/iscsi_tgt/iscsi_tgt.o 00:03:51.071 CC app/spdk_tgt/spdk_tgt.o 00:03:51.329 CC test/thread/poller_perf/poller_perf.o 00:03:51.329 CC examples/util/zipf/zipf.o 00:03:51.329 LINK spdk_lspci 00:03:51.329 LINK nvmf_tgt 00:03:51.329 LINK poller_perf 00:03:51.329 LINK iscsi_tgt 00:03:51.329 LINK spdk_trace_record 00:03:51.329 LINK zipf 00:03:51.329 LINK spdk_tgt 00:03:51.588 LINK spdk_trace 00:03:51.588 CC app/spdk_nvme_discover/discovery_aer.o 00:03:51.588 CC test/dma/test_dma/test_dma.o 00:03:51.588 CC app/spdk_top/spdk_top.o 00:03:51.847 CC examples/ioat/perf/perf.o 00:03:51.847 CC examples/vmd/lsvmd/lsvmd.o 00:03:51.847 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:51.847 CC examples/idxd/perf/perf.o 00:03:51.847 CC examples/vmd/led/led.o 00:03:51.847 LINK spdk_nvme_discover 00:03:51.847 LINK lsvmd 00:03:52.105 LINK interrupt_tgt 00:03:52.105 LINK led 00:03:52.105 LINK ioat_perf 00:03:52.105 LINK spdk_nvme_identify 00:03:52.105 LINK test_dma 00:03:52.364 LINK idxd_perf 00:03:52.364 LINK spdk_nvme_perf 00:03:52.364 TEST_HEADER include/spdk/accel.h 00:03:52.364 TEST_HEADER include/spdk/accel_module.h 00:03:52.364 TEST_HEADER include/spdk/assert.h 00:03:52.364 TEST_HEADER include/spdk/barrier.h 00:03:52.364 TEST_HEADER include/spdk/base64.h 00:03:52.364 TEST_HEADER 
include/spdk/bdev.h 00:03:52.364 TEST_HEADER include/spdk/bdev_module.h 00:03:52.364 TEST_HEADER include/spdk/bdev_zone.h 00:03:52.364 TEST_HEADER include/spdk/bit_array.h 00:03:52.364 TEST_HEADER include/spdk/bit_pool.h 00:03:52.364 TEST_HEADER include/spdk/blob_bdev.h 00:03:52.364 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:52.364 CC examples/sock/hello_world/hello_sock.o 00:03:52.364 TEST_HEADER include/spdk/blobfs.h 00:03:52.364 TEST_HEADER include/spdk/blob.h 00:03:52.364 TEST_HEADER include/spdk/conf.h 00:03:52.364 TEST_HEADER include/spdk/config.h 00:03:52.364 TEST_HEADER include/spdk/cpuset.h 00:03:52.364 TEST_HEADER include/spdk/crc16.h 00:03:52.364 TEST_HEADER include/spdk/crc32.h 00:03:52.364 TEST_HEADER include/spdk/crc64.h 00:03:52.364 TEST_HEADER include/spdk/dif.h 00:03:52.364 TEST_HEADER include/spdk/dma.h 00:03:52.364 TEST_HEADER include/spdk/endian.h 00:03:52.364 TEST_HEADER include/spdk/env_dpdk.h 00:03:52.364 CC examples/thread/thread/thread_ex.o 00:03:52.364 TEST_HEADER include/spdk/env.h 00:03:52.364 TEST_HEADER include/spdk/event.h 00:03:52.364 TEST_HEADER include/spdk/fd_group.h 00:03:52.364 TEST_HEADER include/spdk/fd.h 00:03:52.364 TEST_HEADER include/spdk/file.h 00:03:52.364 TEST_HEADER include/spdk/fsdev.h 00:03:52.364 TEST_HEADER include/spdk/fsdev_module.h 00:03:52.364 TEST_HEADER include/spdk/ftl.h 00:03:52.364 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:52.364 TEST_HEADER include/spdk/gpt_spec.h 00:03:52.364 TEST_HEADER include/spdk/hexlify.h 00:03:52.364 TEST_HEADER include/spdk/histogram_data.h 00:03:52.364 TEST_HEADER include/spdk/idxd.h 00:03:52.364 TEST_HEADER include/spdk/idxd_spec.h 00:03:52.364 TEST_HEADER include/spdk/init.h 00:03:52.364 TEST_HEADER include/spdk/ioat.h 00:03:52.364 TEST_HEADER include/spdk/ioat_spec.h 00:03:52.364 TEST_HEADER include/spdk/iscsi_spec.h 00:03:52.364 TEST_HEADER include/spdk/json.h 00:03:52.364 TEST_HEADER include/spdk/jsonrpc.h 00:03:52.364 TEST_HEADER include/spdk/keyring.h 00:03:52.364 
TEST_HEADER include/spdk/keyring_module.h 00:03:52.364 TEST_HEADER include/spdk/likely.h 00:03:52.364 TEST_HEADER include/spdk/log.h 00:03:52.364 TEST_HEADER include/spdk/lvol.h 00:03:52.364 TEST_HEADER include/spdk/md5.h 00:03:52.364 TEST_HEADER include/spdk/memory.h 00:03:52.364 TEST_HEADER include/spdk/mmio.h 00:03:52.364 CC examples/ioat/verify/verify.o 00:03:52.364 TEST_HEADER include/spdk/nbd.h 00:03:52.364 TEST_HEADER include/spdk/net.h 00:03:52.364 TEST_HEADER include/spdk/notify.h 00:03:52.364 TEST_HEADER include/spdk/nvme.h 00:03:52.364 TEST_HEADER include/spdk/nvme_intel.h 00:03:52.364 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:52.364 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:52.364 TEST_HEADER include/spdk/nvme_spec.h 00:03:52.364 TEST_HEADER include/spdk/nvme_zns.h 00:03:52.364 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:52.364 CC test/app/bdev_svc/bdev_svc.o 00:03:52.364 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:52.364 TEST_HEADER include/spdk/nvmf.h 00:03:52.623 TEST_HEADER include/spdk/nvmf_spec.h 00:03:52.623 TEST_HEADER include/spdk/nvmf_transport.h 00:03:52.623 TEST_HEADER include/spdk/opal.h 00:03:52.623 TEST_HEADER include/spdk/opal_spec.h 00:03:52.623 TEST_HEADER include/spdk/pci_ids.h 00:03:52.623 TEST_HEADER include/spdk/pipe.h 00:03:52.623 TEST_HEADER include/spdk/queue.h 00:03:52.623 TEST_HEADER include/spdk/reduce.h 00:03:52.623 TEST_HEADER include/spdk/rpc.h 00:03:52.623 TEST_HEADER include/spdk/scheduler.h 00:03:52.623 TEST_HEADER include/spdk/scsi.h 00:03:52.623 TEST_HEADER include/spdk/scsi_spec.h 00:03:52.623 TEST_HEADER include/spdk/sock.h 00:03:52.623 TEST_HEADER include/spdk/stdinc.h 00:03:52.623 TEST_HEADER include/spdk/string.h 00:03:52.623 TEST_HEADER include/spdk/thread.h 00:03:52.623 TEST_HEADER include/spdk/trace.h 00:03:52.623 TEST_HEADER include/spdk/trace_parser.h 00:03:52.623 TEST_HEADER include/spdk/tree.h 00:03:52.623 TEST_HEADER include/spdk/ublk.h 00:03:52.623 TEST_HEADER include/spdk/util.h 00:03:52.623 
TEST_HEADER include/spdk/uuid.h 00:03:52.623 TEST_HEADER include/spdk/version.h 00:03:52.623 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:52.623 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:52.623 TEST_HEADER include/spdk/vhost.h 00:03:52.623 TEST_HEADER include/spdk/vmd.h 00:03:52.623 TEST_HEADER include/spdk/xor.h 00:03:52.623 TEST_HEADER include/spdk/zipf.h 00:03:52.623 CXX test/cpp_headers/accel.o 00:03:52.623 CC app/spdk_dd/spdk_dd.o 00:03:52.623 CC test/event/event_perf/event_perf.o 00:03:52.623 LINK bdev_svc 00:03:52.623 CC test/env/vtophys/vtophys.o 00:03:52.623 LINK verify 00:03:52.623 LINK thread 00:03:52.623 CXX test/cpp_headers/accel_module.o 00:03:52.623 CC test/env/mem_callbacks/mem_callbacks.o 00:03:52.882 LINK hello_sock 00:03:52.882 LINK event_perf 00:03:52.882 LINK vtophys 00:03:52.882 LINK spdk_top 00:03:52.882 CXX test/cpp_headers/assert.o 00:03:53.142 CC test/app/histogram_perf/histogram_perf.o 00:03:53.142 LINK spdk_dd 00:03:53.142 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:53.142 CC test/event/reactor/reactor.o 00:03:53.142 CC test/nvme/aer/aer.o 00:03:53.142 CXX test/cpp_headers/barrier.o 00:03:53.142 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:53.142 LINK histogram_perf 00:03:53.142 CC examples/nvme/hello_world/hello_world.o 00:03:53.142 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:53.142 LINK reactor 00:03:53.402 CXX test/cpp_headers/base64.o 00:03:53.402 LINK mem_callbacks 00:03:53.402 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:53.402 LINK aer 00:03:53.402 LINK hello_world 00:03:53.402 CC app/fio/nvme/fio_plugin.o 00:03:53.402 CC test/event/reactor_perf/reactor_perf.o 00:03:53.663 CXX test/cpp_headers/bdev.o 00:03:53.663 LINK nvme_fuzz 00:03:53.663 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:53.663 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:53.663 LINK reactor_perf 00:03:53.663 CXX test/cpp_headers/bdev_module.o 00:03:53.663 CC test/nvme/reset/reset.o 00:03:53.922 LINK env_dpdk_post_init 
00:03:53.922 CC examples/nvme/reconnect/reconnect.o 00:03:53.922 CC test/nvme/sgl/sgl.o 00:03:53.922 LINK vhost_fuzz 00:03:53.922 CXX test/cpp_headers/bdev_zone.o 00:03:53.922 CC test/event/app_repeat/app_repeat.o 00:03:53.922 LINK hello_fsdev 00:03:54.185 LINK reset 00:03:54.185 CC test/env/memory/memory_ut.o 00:03:54.185 CXX test/cpp_headers/bit_array.o 00:03:54.185 LINK sgl 00:03:54.185 CC test/env/pci/pci_ut.o 00:03:54.185 LINK app_repeat 00:03:54.185 CXX test/cpp_headers/bit_pool.o 00:03:54.185 LINK spdk_nvme 00:03:54.185 LINK reconnect 00:03:54.454 CC test/app/jsoncat/jsoncat.o 00:03:54.454 CXX test/cpp_headers/blob_bdev.o 00:03:54.454 CC test/nvme/e2edp/nvme_dp.o 00:03:54.454 CC examples/accel/perf/accel_perf.o 00:03:54.454 LINK jsoncat 00:03:54.454 CC app/fio/bdev/fio_plugin.o 00:03:54.454 CC test/event/scheduler/scheduler.o 00:03:54.713 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:54.713 CXX test/cpp_headers/blobfs_bdev.o 00:03:54.713 CXX test/cpp_headers/blobfs.o 00:03:54.713 LINK pci_ut 00:03:54.713 LINK nvme_dp 00:03:54.713 LINK scheduler 00:03:54.972 CXX test/cpp_headers/blob.o 00:03:54.972 CC app/vhost/vhost.o 00:03:54.972 CXX test/cpp_headers/conf.o 00:03:54.972 CXX test/cpp_headers/config.o 00:03:54.972 LINK accel_perf 00:03:54.972 CC test/nvme/overhead/overhead.o 00:03:55.232 CXX test/cpp_headers/cpuset.o 00:03:55.232 CC test/app/stub/stub.o 00:03:55.232 LINK vhost 00:03:55.232 CC test/rpc_client/rpc_client_test.o 00:03:55.232 LINK spdk_bdev 00:03:55.232 CXX test/cpp_headers/crc16.o 00:03:55.232 CXX test/cpp_headers/crc32.o 00:03:55.232 LINK stub 00:03:55.492 LINK rpc_client_test 00:03:55.492 CC examples/nvme/arbitration/arbitration.o 00:03:55.492 LINK nvme_manage 00:03:55.492 LINK memory_ut 00:03:55.492 LINK overhead 00:03:55.492 CC examples/nvme/hotplug/hotplug.o 00:03:55.492 CXX test/cpp_headers/crc64.o 00:03:55.492 LINK iscsi_fuzz 00:03:55.492 CXX test/cpp_headers/dif.o 00:03:55.492 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:55.492 CXX 
test/cpp_headers/dma.o 00:03:55.751 CC test/nvme/reserve/reserve.o 00:03:55.751 LINK cmb_copy 00:03:55.751 CC test/nvme/startup/startup.o 00:03:55.751 CC test/nvme/err_injection/err_injection.o 00:03:55.751 LINK hotplug 00:03:55.751 CXX test/cpp_headers/endian.o 00:03:55.751 LINK arbitration 00:03:55.751 CC examples/blob/hello_world/hello_blob.o 00:03:55.751 CC examples/nvme/abort/abort.o 00:03:55.751 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:56.009 LINK startup 00:03:56.009 CXX test/cpp_headers/env_dpdk.o 00:03:56.009 CXX test/cpp_headers/env.o 00:03:56.009 LINK err_injection 00:03:56.009 LINK reserve 00:03:56.009 LINK pmr_persistence 00:03:56.009 LINK hello_blob 00:03:56.009 CC test/nvme/simple_copy/simple_copy.o 00:03:56.268 CXX test/cpp_headers/event.o 00:03:56.268 CC test/accel/dif/dif.o 00:03:56.268 CXX test/cpp_headers/fd_group.o 00:03:56.268 CXX test/cpp_headers/fd.o 00:03:56.268 CC test/nvme/connect_stress/connect_stress.o 00:03:56.268 CXX test/cpp_headers/file.o 00:03:56.268 LINK abort 00:03:56.268 CC test/blobfs/mkfs/mkfs.o 00:03:56.268 LINK simple_copy 00:03:56.268 CXX test/cpp_headers/fsdev.o 00:03:56.527 CC examples/blob/cli/blobcli.o 00:03:56.527 CXX test/cpp_headers/fsdev_module.o 00:03:56.527 CXX test/cpp_headers/ftl.o 00:03:56.527 LINK connect_stress 00:03:56.527 LINK mkfs 00:03:56.786 CC test/nvme/boot_partition/boot_partition.o 00:03:56.786 CC examples/bdev/hello_world/hello_bdev.o 00:03:56.786 CXX test/cpp_headers/fuse_dispatcher.o 00:03:56.786 CC test/nvme/compliance/nvme_compliance.o 00:03:56.786 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:56.786 CC test/nvme/fused_ordering/fused_ordering.o 00:03:56.786 CXX test/cpp_headers/gpt_spec.o 00:03:56.786 CC test/lvol/esnap/esnap.o 00:03:56.786 LINK boot_partition 00:03:57.045 LINK hello_bdev 00:03:57.045 CXX test/cpp_headers/hexlify.o 00:03:57.045 LINK dif 00:03:57.045 CXX test/cpp_headers/histogram_data.o 00:03:57.045 LINK doorbell_aers 00:03:57.303 LINK fused_ordering 
00:03:57.303 CC examples/bdev/bdevperf/bdevperf.o 00:03:57.303 LINK nvme_compliance 00:03:57.303 CXX test/cpp_headers/idxd.o 00:03:57.303 CXX test/cpp_headers/idxd_spec.o 00:03:57.303 LINK blobcli 00:03:57.303 CC test/nvme/fdp/fdp.o 00:03:57.303 CC test/nvme/cuse/cuse.o 00:03:57.303 CXX test/cpp_headers/init.o 00:03:57.303 CXX test/cpp_headers/ioat.o 00:03:57.562 CXX test/cpp_headers/ioat_spec.o 00:03:57.562 CXX test/cpp_headers/iscsi_spec.o 00:03:57.562 CXX test/cpp_headers/json.o 00:03:57.562 CXX test/cpp_headers/jsonrpc.o 00:03:57.562 CXX test/cpp_headers/keyring.o 00:03:57.562 CC test/bdev/bdevio/bdevio.o 00:03:57.562 CXX test/cpp_headers/keyring_module.o 00:03:57.821 CXX test/cpp_headers/likely.o 00:03:57.821 CXX test/cpp_headers/log.o 00:03:57.821 CXX test/cpp_headers/lvol.o 00:03:57.821 CXX test/cpp_headers/md5.o 00:03:57.821 LINK fdp 00:03:57.821 CXX test/cpp_headers/memory.o 00:03:57.821 CXX test/cpp_headers/mmio.o 00:03:57.821 CXX test/cpp_headers/nbd.o 00:03:57.821 CXX test/cpp_headers/net.o 00:03:57.821 CXX test/cpp_headers/notify.o 00:03:58.079 CXX test/cpp_headers/nvme.o 00:03:58.079 CXX test/cpp_headers/nvme_intel.o 00:03:58.079 CXX test/cpp_headers/nvme_ocssd.o 00:03:58.079 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:58.079 CXX test/cpp_headers/nvme_spec.o 00:03:58.079 CXX test/cpp_headers/nvme_zns.o 00:03:58.079 LINK bdevio 00:03:58.079 CXX test/cpp_headers/nvmf_cmd.o 00:03:58.079 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:58.338 CXX test/cpp_headers/nvmf.o 00:03:58.338 CXX test/cpp_headers/nvmf_spec.o 00:03:58.338 CXX test/cpp_headers/nvmf_transport.o 00:03:58.338 CXX test/cpp_headers/opal.o 00:03:58.338 LINK bdevperf 00:03:58.338 CXX test/cpp_headers/opal_spec.o 00:03:58.338 CXX test/cpp_headers/pci_ids.o 00:03:58.338 CXX test/cpp_headers/pipe.o 00:03:58.338 CXX test/cpp_headers/queue.o 00:03:58.597 CXX test/cpp_headers/reduce.o 00:03:58.597 CXX test/cpp_headers/rpc.o 00:03:58.597 CXX test/cpp_headers/scheduler.o 00:03:58.597 CXX 
test/cpp_headers/scsi.o 00:03:58.597 CXX test/cpp_headers/scsi_spec.o 00:03:58.597 CXX test/cpp_headers/sock.o 00:03:58.597 CXX test/cpp_headers/stdinc.o 00:03:58.597 CXX test/cpp_headers/string.o 00:03:58.855 CXX test/cpp_headers/thread.o 00:03:58.855 CXX test/cpp_headers/trace.o 00:03:58.855 CXX test/cpp_headers/trace_parser.o 00:03:58.855 CXX test/cpp_headers/tree.o 00:03:58.855 CXX test/cpp_headers/ublk.o 00:03:58.855 CXX test/cpp_headers/util.o 00:03:58.855 CXX test/cpp_headers/uuid.o 00:03:58.855 CXX test/cpp_headers/version.o 00:03:58.855 CC examples/nvmf/nvmf/nvmf.o 00:03:58.855 CXX test/cpp_headers/vfio_user_pci.o 00:03:58.855 CXX test/cpp_headers/vfio_user_spec.o 00:03:58.855 CXX test/cpp_headers/vhost.o 00:03:58.855 CXX test/cpp_headers/vmd.o 00:03:58.855 CXX test/cpp_headers/xor.o 00:03:59.114 CXX test/cpp_headers/zipf.o 00:03:59.114 LINK nvmf 00:03:59.374 LINK cuse 00:04:04.647 LINK esnap 00:04:04.647 00:04:04.647 real 1m32.918s 00:04:04.647 user 8m15.193s 00:04:04.647 sys 1m38.943s 00:04:04.647 10:27:07 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:04.647 10:27:07 make -- common/autotest_common.sh@10 -- $ set +x 00:04:04.647 ************************************ 00:04:04.647 END TEST make 00:04:04.647 ************************************ 00:04:04.647 10:27:07 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:04.647 10:27:07 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:04.647 10:27:07 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:04.647 10:27:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:04.647 10:27:07 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:04.647 10:27:07 -- pm/common@44 -- $ pid=5468 00:04:04.647 10:27:07 -- pm/common@50 -- $ kill -TERM 5468 00:04:04.647 10:27:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:04.647 10:27:07 -- pm/common@43 -- $ [[ -e 
/home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:04.647 10:27:07 -- pm/common@44 -- $ pid=5470 00:04:04.647 10:27:07 -- pm/common@50 -- $ kill -TERM 5470 00:04:04.647 10:27:07 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:04.647 10:27:07 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:04.647 10:27:07 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:04.647 10:27:07 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:04.647 10:27:07 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:04.647 10:27:07 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:04.647 10:27:07 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:04.647 10:27:07 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:04.647 10:27:07 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:04.647 10:27:07 -- scripts/common.sh@336 -- # IFS=.-: 00:04:04.647 10:27:07 -- scripts/common.sh@336 -- # read -ra ver1 00:04:04.647 10:27:07 -- scripts/common.sh@337 -- # IFS=.-: 00:04:04.647 10:27:07 -- scripts/common.sh@337 -- # read -ra ver2 00:04:04.647 10:27:07 -- scripts/common.sh@338 -- # local 'op=<' 00:04:04.647 10:27:07 -- scripts/common.sh@340 -- # ver1_l=2 00:04:04.647 10:27:07 -- scripts/common.sh@341 -- # ver2_l=1 00:04:04.647 10:27:07 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:04.647 10:27:07 -- scripts/common.sh@344 -- # case "$op" in 00:04:04.647 10:27:07 -- scripts/common.sh@345 -- # : 1 00:04:04.647 10:27:07 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:04.647 10:27:07 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:04.647 10:27:07 -- scripts/common.sh@365 -- # decimal 1 00:04:04.647 10:27:07 -- scripts/common.sh@353 -- # local d=1 00:04:04.647 10:27:07 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:04.647 10:27:07 -- scripts/common.sh@355 -- # echo 1 00:04:04.647 10:27:07 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:04.647 10:27:07 -- scripts/common.sh@366 -- # decimal 2 00:04:04.647 10:27:07 -- scripts/common.sh@353 -- # local d=2 00:04:04.647 10:27:07 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:04.647 10:27:07 -- scripts/common.sh@355 -- # echo 2 00:04:04.647 10:27:07 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:04.647 10:27:07 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:04.647 10:27:07 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:04.647 10:27:07 -- scripts/common.sh@368 -- # return 0 00:04:04.647 10:27:07 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:04.647 10:27:07 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:04.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.647 --rc genhtml_branch_coverage=1 00:04:04.647 --rc genhtml_function_coverage=1 00:04:04.647 --rc genhtml_legend=1 00:04:04.647 --rc geninfo_all_blocks=1 00:04:04.647 --rc geninfo_unexecuted_blocks=1 00:04:04.647 00:04:04.647 ' 00:04:04.647 10:27:07 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:04.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.647 --rc genhtml_branch_coverage=1 00:04:04.647 --rc genhtml_function_coverage=1 00:04:04.647 --rc genhtml_legend=1 00:04:04.647 --rc geninfo_all_blocks=1 00:04:04.647 --rc geninfo_unexecuted_blocks=1 00:04:04.647 00:04:04.647 ' 00:04:04.647 10:27:07 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:04.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.647 --rc genhtml_branch_coverage=1 00:04:04.647 --rc 
genhtml_function_coverage=1 00:04:04.647 --rc genhtml_legend=1 00:04:04.647 --rc geninfo_all_blocks=1 00:04:04.647 --rc geninfo_unexecuted_blocks=1 00:04:04.647 00:04:04.647 ' 00:04:04.647 10:27:07 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:04.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.648 --rc genhtml_branch_coverage=1 00:04:04.648 --rc genhtml_function_coverage=1 00:04:04.648 --rc genhtml_legend=1 00:04:04.648 --rc geninfo_all_blocks=1 00:04:04.648 --rc geninfo_unexecuted_blocks=1 00:04:04.648 00:04:04.648 ' 00:04:04.648 10:27:07 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:04.648 10:27:07 -- nvmf/common.sh@7 -- # uname -s 00:04:04.648 10:27:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:04.648 10:27:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:04.648 10:27:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:04.648 10:27:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:04.648 10:27:07 -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:04.648 10:27:07 -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:04:04.648 10:27:07 -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:04.648 10:27:07 -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:04:04.648 10:27:07 -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:be5e0f63-c4b2-4a21-a7e7-d50ddb8f0bf8 00:04:04.648 10:27:07 -- nvmf/common.sh@16 -- # NVME_HOSTID=be5e0f63-c4b2-4a21-a7e7-d50ddb8f0bf8 00:04:04.648 10:27:07 -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:04.648 10:27:07 -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:04:04.648 10:27:07 -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:04:04.648 10:27:07 -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:04.648 10:27:07 -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:04.648 10:27:07 -- 
scripts/common.sh@15 -- # shopt -s extglob 00:04:04.648 10:27:07 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:04.648 10:27:07 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:04.648 10:27:07 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:04.648 10:27:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.648 10:27:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.648 10:27:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.648 10:27:07 -- paths/export.sh@5 -- # export PATH 00:04:04.648 10:27:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.648 10:27:07 -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:04:04.648 10:27:07 -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:04:04.648 10:27:07 -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:04:04.648 10:27:07 -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:04:04.648 10:27:07 -- nvmf/common.sh@50 
-- # : 0 00:04:04.648 10:27:07 -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:04:04.648 10:27:07 -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:04:04.648 10:27:07 -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:04:04.648 10:27:07 -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:04.648 10:27:07 -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:04.648 10:27:07 -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:04:04.648 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:04:04.648 10:27:07 -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:04:04.648 10:27:07 -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:04:04.648 10:27:07 -- nvmf/common.sh@54 -- # have_pci_nics=0 00:04:04.648 10:27:07 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:04.648 10:27:07 -- spdk/autotest.sh@32 -- # uname -s 00:04:04.648 10:27:07 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:04.648 10:27:07 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:04.648 10:27:07 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:04.648 10:27:07 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:04.648 10:27:07 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:04.648 10:27:07 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:04.648 10:27:07 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:04.648 10:27:07 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:04.648 10:27:07 -- spdk/autotest.sh@48 -- # udevadm_pid=54506 00:04:04.648 10:27:07 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:04.648 10:27:07 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:04.648 10:27:07 -- pm/common@17 -- # local monitor 00:04:04.648 10:27:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:04.648 10:27:07 -- 
pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:04.648 10:27:07 -- pm/common@21 -- # date +%s 00:04:04.648 10:27:07 -- pm/common@25 -- # sleep 1 00:04:04.648 10:27:07 -- pm/common@21 -- # date +%s 00:04:04.648 10:27:07 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732098427 00:04:04.648 10:27:07 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732098427 00:04:04.648 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732098427_collect-cpu-load.pm.log 00:04:04.648 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732098427_collect-vmstat.pm.log 00:04:05.583 10:27:08 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:05.583 10:27:08 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:05.583 10:27:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:05.583 10:27:08 -- common/autotest_common.sh@10 -- # set +x 00:04:05.583 10:27:08 -- spdk/autotest.sh@59 -- # create_test_list 00:04:05.583 10:27:08 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:05.583 10:27:08 -- common/autotest_common.sh@10 -- # set +x 00:04:05.583 10:27:08 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:05.583 10:27:08 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:05.583 10:27:08 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:05.583 10:27:08 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:05.583 10:27:08 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:05.583 10:27:08 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:05.583 10:27:08 -- common/autotest_common.sh@1457 -- # uname 00:04:05.583 10:27:08 -- 
common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:05.583 10:27:08 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:05.583 10:27:08 -- common/autotest_common.sh@1477 -- # uname 00:04:05.583 10:27:08 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:05.583 10:27:08 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:05.583 10:27:08 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:05.925 lcov: LCOV version 1.15 00:04:05.925 10:27:09 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:24.004 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:24.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:38.891 10:27:40 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:38.892 10:27:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:38.892 10:27:40 -- common/autotest_common.sh@10 -- # set +x 00:04:38.892 10:27:40 -- spdk/autotest.sh@78 -- # rm -f 00:04:38.892 10:27:40 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:38.892 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:38.892 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:38.892 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:38.892 10:27:41 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:38.892 10:27:41 -- common/autotest_common.sh@1657 -- # 
zoned_devs=() 00:04:38.892 10:27:41 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:38.892 10:27:41 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:38.892 10:27:41 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:38.892 10:27:41 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:38.892 10:27:41 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:38.892 10:27:41 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:38.892 10:27:41 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:38.892 10:27:41 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:38.892 10:27:41 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:38.892 10:27:41 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:38.892 10:27:41 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:38.892 10:27:41 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:38.892 10:27:41 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:38.892 10:27:41 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:04:38.892 10:27:41 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:38.892 10:27:41 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:38.892 10:27:41 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:38.892 10:27:41 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:38.892 10:27:41 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:04:38.892 10:27:41 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:38.892 10:27:41 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:38.892 10:27:41 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:38.892 10:27:41 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:38.892 
10:27:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:38.892 10:27:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:38.892 10:27:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:38.892 10:27:41 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:38.892 10:27:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:38.892 No valid GPT data, bailing 00:04:38.892 10:27:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:38.892 10:27:41 -- scripts/common.sh@394 -- # pt= 00:04:38.892 10:27:41 -- scripts/common.sh@395 -- # return 1 00:04:38.892 10:27:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:38.892 1+0 records in 00:04:38.892 1+0 records out 00:04:38.892 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00402225 s, 261 MB/s 00:04:38.892 10:27:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:38.892 10:27:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:38.892 10:27:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:38.892 10:27:41 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:38.892 10:27:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:38.892 No valid GPT data, bailing 00:04:38.892 10:27:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:38.892 10:27:41 -- scripts/common.sh@394 -- # pt= 00:04:38.892 10:27:41 -- scripts/common.sh@395 -- # return 1 00:04:38.892 10:27:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:38.892 1+0 records in 00:04:38.892 1+0 records out 00:04:38.892 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00605544 s, 173 MB/s 00:04:38.892 10:27:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:38.892 10:27:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:38.892 10:27:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:38.892 
10:27:41 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:38.892 10:27:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:38.892 No valid GPT data, bailing 00:04:38.892 10:27:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:38.892 10:27:41 -- scripts/common.sh@394 -- # pt= 00:04:38.892 10:27:41 -- scripts/common.sh@395 -- # return 1 00:04:38.892 10:27:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:38.892 1+0 records in 00:04:38.892 1+0 records out 00:04:38.892 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0041714 s, 251 MB/s 00:04:38.892 10:27:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:38.892 10:27:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:38.892 10:27:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:38.892 10:27:41 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:38.892 10:27:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:38.892 No valid GPT data, bailing 00:04:38.892 10:27:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:38.892 10:27:41 -- scripts/common.sh@394 -- # pt= 00:04:38.892 10:27:41 -- scripts/common.sh@395 -- # return 1 00:04:38.892 10:27:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:38.892 1+0 records in 00:04:38.892 1+0 records out 00:04:38.892 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00432938 s, 242 MB/s 00:04:38.892 10:27:41 -- spdk/autotest.sh@105 -- # sync 00:04:38.892 10:27:41 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:38.892 10:27:41 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:38.892 10:27:41 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:41.427 10:27:44 -- spdk/autotest.sh@111 -- # uname -s 00:04:41.427 10:27:44 -- spdk/autotest.sh@111 -- # [[ Linux == 
Linux ]] 00:04:41.427 10:27:44 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:41.427 10:27:44 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:41.994 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:41.994 Hugepages 00:04:41.994 node hugesize free / total 00:04:41.994 node0 1048576kB 0 / 0 00:04:41.994 node0 2048kB 0 / 0 00:04:41.994 00:04:41.994 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:41.994 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:42.253 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:42.253 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:42.253 10:27:45 -- spdk/autotest.sh@117 -- # uname -s 00:04:42.253 10:27:45 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:42.253 10:27:45 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:42.253 10:27:45 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:43.190 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:43.190 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:43.190 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:43.190 10:27:46 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:44.128 10:27:47 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:44.128 10:27:47 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:44.128 10:27:47 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:44.129 10:27:47 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:44.129 10:27:47 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:44.129 10:27:47 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:44.129 10:27:47 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:44.129 10:27:47 -- common/autotest_common.sh@1499 
-- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:44.129 10:27:47 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:44.389 10:27:47 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:44.389 10:27:47 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:44.389 10:27:47 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:44.715 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:44.715 Waiting for block devices as requested 00:04:44.976 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:44.976 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:44.976 10:27:48 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:44.976 10:27:48 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:44.976 10:27:48 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:44.976 10:27:48 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:44.976 10:27:48 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:44.976 10:27:48 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:44.976 10:27:48 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:44.976 10:27:48 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:44.976 10:27:48 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:44.976 10:27:48 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:44.976 10:27:48 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:44.976 10:27:48 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:44.976 10:27:48 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:44.976 10:27:48 -- 
common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:44.976 10:27:48 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:44.976 10:27:48 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:44.976 10:27:48 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:44.976 10:27:48 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:44.976 10:27:48 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:44.976 10:27:48 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:44.976 10:27:48 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:44.976 10:27:48 -- common/autotest_common.sh@1543 -- # continue 00:04:44.976 10:27:48 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:44.976 10:27:48 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:44.976 10:27:48 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:44.976 10:27:48 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:44.976 10:27:48 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:44.976 10:27:48 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:44.976 10:27:48 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:44.976 10:27:48 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:44.976 10:27:48 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:44.976 10:27:48 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:44.976 10:27:48 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:44.976 10:27:48 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:44.976 10:27:48 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:44.976 10:27:48 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:44.976 10:27:48 -- 
common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:44.976 10:27:48 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:44.976 10:27:48 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:44.976 10:27:48 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:44.976 10:27:48 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:45.236 10:27:48 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:45.236 10:27:48 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:45.236 10:27:48 -- common/autotest_common.sh@1543 -- # continue 00:04:45.236 10:27:48 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:45.236 10:27:48 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:45.236 10:27:48 -- common/autotest_common.sh@10 -- # set +x 00:04:45.236 10:27:48 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:45.236 10:27:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:45.236 10:27:48 -- common/autotest_common.sh@10 -- # set +x 00:04:45.236 10:27:48 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:46.174 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:46.174 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:46.174 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:46.174 10:27:49 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:46.174 10:27:49 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:46.174 10:27:49 -- common/autotest_common.sh@10 -- # set +x 00:04:46.174 10:27:49 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:46.174 10:27:49 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:46.174 10:27:49 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:46.174 10:27:49 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:46.174 10:27:49 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:46.174 10:27:49 -- 
common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:46.174 10:27:49 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:46.174 10:27:49 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:46.174 10:27:49 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:46.174 10:27:49 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:46.174 10:27:49 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:46.174 10:27:49 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:46.174 10:27:49 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:46.433 10:27:49 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:46.433 10:27:49 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:46.433 10:27:49 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:46.433 10:27:49 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:46.433 10:27:49 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:46.433 10:27:49 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:46.433 10:27:49 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:46.433 10:27:49 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:46.433 10:27:49 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:46.433 10:27:49 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:46.433 10:27:49 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:46.433 10:27:49 -- common/autotest_common.sh@1572 -- # return 0 00:04:46.433 10:27:49 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:46.433 10:27:49 -- common/autotest_common.sh@1580 -- # return 0 00:04:46.433 10:27:49 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:46.433 10:27:49 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 
']' 00:04:46.433 10:27:49 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:46.433 10:27:49 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:46.433 10:27:49 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:46.433 10:27:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:46.433 10:27:49 -- common/autotest_common.sh@10 -- # set +x 00:04:46.433 10:27:49 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:46.433 10:27:49 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:46.433 10:27:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.433 10:27:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.433 10:27:49 -- common/autotest_common.sh@10 -- # set +x 00:04:46.433 ************************************ 00:04:46.433 START TEST env 00:04:46.433 ************************************ 00:04:46.433 10:27:49 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:46.433 * Looking for test storage... 
00:04:46.433 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:46.433 10:27:49 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:46.433 10:27:49 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:46.434 10:27:49 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:46.693 10:27:49 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:46.693 10:27:49 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:46.693 10:27:49 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:46.693 10:27:49 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:46.693 10:27:49 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.693 10:27:49 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:46.693 10:27:49 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:46.693 10:27:49 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:46.693 10:27:49 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:46.693 10:27:49 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:46.693 10:27:49 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:46.693 10:27:49 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:46.693 10:27:49 env -- scripts/common.sh@344 -- # case "$op" in 00:04:46.693 10:27:49 env -- scripts/common.sh@345 -- # : 1 00:04:46.693 10:27:49 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:46.693 10:27:49 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:46.693 10:27:49 env -- scripts/common.sh@365 -- # decimal 1 00:04:46.693 10:27:49 env -- scripts/common.sh@353 -- # local d=1 00:04:46.693 10:27:49 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.693 10:27:49 env -- scripts/common.sh@355 -- # echo 1 00:04:46.693 10:27:49 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:46.693 10:27:49 env -- scripts/common.sh@366 -- # decimal 2 00:04:46.693 10:27:49 env -- scripts/common.sh@353 -- # local d=2 00:04:46.693 10:27:49 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.693 10:27:49 env -- scripts/common.sh@355 -- # echo 2 00:04:46.693 10:27:49 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:46.693 10:27:49 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:46.693 10:27:49 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:46.693 10:27:49 env -- scripts/common.sh@368 -- # return 0 00:04:46.693 10:27:49 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.693 10:27:49 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:46.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.693 --rc genhtml_branch_coverage=1 00:04:46.693 --rc genhtml_function_coverage=1 00:04:46.693 --rc genhtml_legend=1 00:04:46.693 --rc geninfo_all_blocks=1 00:04:46.693 --rc geninfo_unexecuted_blocks=1 00:04:46.693 00:04:46.693 ' 00:04:46.693 10:27:49 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:46.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.693 --rc genhtml_branch_coverage=1 00:04:46.693 --rc genhtml_function_coverage=1 00:04:46.693 --rc genhtml_legend=1 00:04:46.693 --rc geninfo_all_blocks=1 00:04:46.693 --rc geninfo_unexecuted_blocks=1 00:04:46.693 00:04:46.693 ' 00:04:46.693 10:27:49 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:46.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:46.693 --rc genhtml_branch_coverage=1 00:04:46.693 --rc genhtml_function_coverage=1 00:04:46.693 --rc genhtml_legend=1 00:04:46.693 --rc geninfo_all_blocks=1 00:04:46.693 --rc geninfo_unexecuted_blocks=1 00:04:46.693 00:04:46.693 ' 00:04:46.693 10:27:49 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:46.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.693 --rc genhtml_branch_coverage=1 00:04:46.693 --rc genhtml_function_coverage=1 00:04:46.693 --rc genhtml_legend=1 00:04:46.693 --rc geninfo_all_blocks=1 00:04:46.694 --rc geninfo_unexecuted_blocks=1 00:04:46.694 00:04:46.694 ' 00:04:46.694 10:27:49 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:46.694 10:27:49 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.694 10:27:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.694 10:27:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:46.694 ************************************ 00:04:46.694 START TEST env_memory 00:04:46.694 ************************************ 00:04:46.694 10:27:49 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:46.694 00:04:46.694 00:04:46.694 CUnit - A unit testing framework for C - Version 2.1-3 00:04:46.694 http://cunit.sourceforge.net/ 00:04:46.694 00:04:46.694 00:04:46.694 Suite: memory 00:04:46.694 Test: alloc and free memory map ...[2024-11-20 10:27:50.041717] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:46.694 passed 00:04:46.694 Test: mem map translation ...[2024-11-20 10:27:50.094096] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:46.694 [2024-11-20 10:27:50.094255] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:46.694 [2024-11-20 10:27:50.094370] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:46.694 [2024-11-20 10:27:50.094409] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:46.953 passed 00:04:46.953 Test: mem map registration ...[2024-11-20 10:27:50.187267] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:46.953 [2024-11-20 10:27:50.187385] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:46.953 passed 00:04:46.953 Test: mem map adjacent registrations ...passed 00:04:46.953 00:04:46.953 Run Summary: Type Total Ran Passed Failed Inactive 00:04:46.953 suites 1 1 n/a 0 0 00:04:46.953 tests 4 4 4 0 0 00:04:46.953 asserts 152 152 152 0 n/a 00:04:46.953 00:04:46.953 Elapsed time = 0.286 seconds 00:04:46.953 00:04:46.953 real 0m0.340s 00:04:46.953 user 0m0.297s 00:04:46.953 sys 0m0.033s 00:04:46.953 10:27:50 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.953 10:27:50 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:46.953 ************************************ 00:04:46.953 END TEST env_memory 00:04:46.953 ************************************ 00:04:46.953 10:27:50 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:46.953 10:27:50 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.953 10:27:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.953 10:27:50 env -- common/autotest_common.sh@10 -- # set +x 00:04:46.953 
************************************ 00:04:46.953 START TEST env_vtophys 00:04:46.953 ************************************ 00:04:46.953 10:27:50 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:46.953 EAL: lib.eal log level changed from notice to debug 00:04:46.953 EAL: Detected lcore 0 as core 0 on socket 0 00:04:46.953 EAL: Detected lcore 1 as core 0 on socket 0 00:04:46.953 EAL: Detected lcore 2 as core 0 on socket 0 00:04:46.953 EAL: Detected lcore 3 as core 0 on socket 0 00:04:46.953 EAL: Detected lcore 4 as core 0 on socket 0 00:04:46.953 EAL: Detected lcore 5 as core 0 on socket 0 00:04:46.953 EAL: Detected lcore 6 as core 0 on socket 0 00:04:46.953 EAL: Detected lcore 7 as core 0 on socket 0 00:04:46.953 EAL: Detected lcore 8 as core 0 on socket 0 00:04:46.953 EAL: Detected lcore 9 as core 0 on socket 0 00:04:46.953 EAL: Maximum logical cores by configuration: 128 00:04:46.953 EAL: Detected CPU lcores: 10 00:04:46.953 EAL: Detected NUMA nodes: 1 00:04:46.953 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:46.953 EAL: Detected shared linkage of DPDK 00:04:47.212 EAL: No shared files mode enabled, IPC will be disabled 00:04:47.212 EAL: Selected IOVA mode 'PA' 00:04:47.212 EAL: Probing VFIO support... 00:04:47.212 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:47.212 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:47.212 EAL: Ask a virtual area of 0x2e000 bytes 00:04:47.212 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:47.212 EAL: Setting up physically contiguous memory... 
00:04:47.212 EAL: Setting maximum number of open files to 524288 00:04:47.212 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:47.212 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:47.212 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.212 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:47.212 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:47.212 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.212 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:47.212 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:47.212 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.212 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:47.212 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:47.212 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.212 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:47.212 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:47.212 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.212 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:47.212 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:47.212 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.212 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:47.212 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:47.212 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.212 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:47.212 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:47.212 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.212 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:47.212 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:47.212 EAL: Hugepages will be freed exactly as allocated. 
00:04:47.212 EAL: No shared files mode enabled, IPC is disabled 00:04:47.212 EAL: No shared files mode enabled, IPC is disabled 00:04:47.212 EAL: TSC frequency is ~2290000 KHz 00:04:47.212 EAL: Main lcore 0 is ready (tid=7f7668621a40;cpuset=[0]) 00:04:47.212 EAL: Trying to obtain current memory policy. 00:04:47.212 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.212 EAL: Restoring previous memory policy: 0 00:04:47.212 EAL: request: mp_malloc_sync 00:04:47.212 EAL: No shared files mode enabled, IPC is disabled 00:04:47.212 EAL: Heap on socket 0 was expanded by 2MB 00:04:47.212 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:47.212 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:47.212 EAL: Mem event callback 'spdk:(nil)' registered 00:04:47.212 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:47.212 00:04:47.212 00:04:47.212 CUnit - A unit testing framework for C - Version 2.1-3 00:04:47.212 http://cunit.sourceforge.net/ 00:04:47.212 00:04:47.212 00:04:47.212 Suite: components_suite 00:04:47.780 Test: vtophys_malloc_test ...passed 00:04:47.780 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:47.780 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.780 EAL: Restoring previous memory policy: 4 00:04:47.780 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.780 EAL: request: mp_malloc_sync 00:04:47.780 EAL: No shared files mode enabled, IPC is disabled 00:04:47.780 EAL: Heap on socket 0 was expanded by 4MB 00:04:47.780 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.780 EAL: request: mp_malloc_sync 00:04:47.780 EAL: No shared files mode enabled, IPC is disabled 00:04:47.780 EAL: Heap on socket 0 was shrunk by 4MB 00:04:47.780 EAL: Trying to obtain current memory policy. 
00:04:47.780 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.780 EAL: Restoring previous memory policy: 4 00:04:47.780 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.780 EAL: request: mp_malloc_sync 00:04:47.780 EAL: No shared files mode enabled, IPC is disabled 00:04:47.780 EAL: Heap on socket 0 was expanded by 6MB 00:04:47.780 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.780 EAL: request: mp_malloc_sync 00:04:47.780 EAL: No shared files mode enabled, IPC is disabled 00:04:47.780 EAL: Heap on socket 0 was shrunk by 6MB 00:04:47.780 EAL: Trying to obtain current memory policy. 00:04:47.780 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.780 EAL: Restoring previous memory policy: 4 00:04:47.780 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.780 EAL: request: mp_malloc_sync 00:04:47.780 EAL: No shared files mode enabled, IPC is disabled 00:04:47.780 EAL: Heap on socket 0 was expanded by 10MB 00:04:47.780 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.780 EAL: request: mp_malloc_sync 00:04:47.780 EAL: No shared files mode enabled, IPC is disabled 00:04:47.780 EAL: Heap on socket 0 was shrunk by 10MB 00:04:47.780 EAL: Trying to obtain current memory policy. 00:04:47.780 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.780 EAL: Restoring previous memory policy: 4 00:04:47.780 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.780 EAL: request: mp_malloc_sync 00:04:47.780 EAL: No shared files mode enabled, IPC is disabled 00:04:47.780 EAL: Heap on socket 0 was expanded by 18MB 00:04:47.780 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.780 EAL: request: mp_malloc_sync 00:04:47.780 EAL: No shared files mode enabled, IPC is disabled 00:04:47.780 EAL: Heap on socket 0 was shrunk by 18MB 00:04:47.780 EAL: Trying to obtain current memory policy. 
00:04:47.780 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.780 EAL: Restoring previous memory policy: 4 00:04:47.780 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.780 EAL: request: mp_malloc_sync 00:04:47.780 EAL: No shared files mode enabled, IPC is disabled 00:04:47.780 EAL: Heap on socket 0 was expanded by 34MB 00:04:47.780 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.780 EAL: request: mp_malloc_sync 00:04:47.780 EAL: No shared files mode enabled, IPC is disabled 00:04:47.780 EAL: Heap on socket 0 was shrunk by 34MB 00:04:48.039 EAL: Trying to obtain current memory policy. 00:04:48.039 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.039 EAL: Restoring previous memory policy: 4 00:04:48.039 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.039 EAL: request: mp_malloc_sync 00:04:48.039 EAL: No shared files mode enabled, IPC is disabled 00:04:48.039 EAL: Heap on socket 0 was expanded by 66MB 00:04:48.039 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.039 EAL: request: mp_malloc_sync 00:04:48.039 EAL: No shared files mode enabled, IPC is disabled 00:04:48.039 EAL: Heap on socket 0 was shrunk by 66MB 00:04:48.297 EAL: Trying to obtain current memory policy. 00:04:48.297 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.297 EAL: Restoring previous memory policy: 4 00:04:48.298 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.298 EAL: request: mp_malloc_sync 00:04:48.298 EAL: No shared files mode enabled, IPC is disabled 00:04:48.298 EAL: Heap on socket 0 was expanded by 130MB 00:04:48.557 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.557 EAL: request: mp_malloc_sync 00:04:48.557 EAL: No shared files mode enabled, IPC is disabled 00:04:48.557 EAL: Heap on socket 0 was shrunk by 130MB 00:04:48.816 EAL: Trying to obtain current memory policy. 
00:04:48.816 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.816 EAL: Restoring previous memory policy: 4 00:04:48.816 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.816 EAL: request: mp_malloc_sync 00:04:48.816 EAL: No shared files mode enabled, IPC is disabled 00:04:48.816 EAL: Heap on socket 0 was expanded by 258MB 00:04:49.384 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.384 EAL: request: mp_malloc_sync 00:04:49.384 EAL: No shared files mode enabled, IPC is disabled 00:04:49.384 EAL: Heap on socket 0 was shrunk by 258MB 00:04:49.950 EAL: Trying to obtain current memory policy. 00:04:49.950 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.950 EAL: Restoring previous memory policy: 4 00:04:49.950 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.951 EAL: request: mp_malloc_sync 00:04:49.951 EAL: No shared files mode enabled, IPC is disabled 00:04:49.951 EAL: Heap on socket 0 was expanded by 514MB 00:04:50.884 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.144 EAL: request: mp_malloc_sync 00:04:51.144 EAL: No shared files mode enabled, IPC is disabled 00:04:51.144 EAL: Heap on socket 0 was shrunk by 514MB 00:04:52.080 EAL: Trying to obtain current memory policy. 
00:04:52.080 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.339 EAL: Restoring previous memory policy: 4 00:04:52.339 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.339 EAL: request: mp_malloc_sync 00:04:52.339 EAL: No shared files mode enabled, IPC is disabled 00:04:52.339 EAL: Heap on socket 0 was expanded by 1026MB 00:04:54.244 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.503 EAL: request: mp_malloc_sync 00:04:54.503 EAL: No shared files mode enabled, IPC is disabled 00:04:54.503 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:56.410 passed 00:04:56.410 00:04:56.410 Run Summary: Type Total Ran Passed Failed Inactive 00:04:56.410 suites 1 1 n/a 0 0 00:04:56.410 tests 2 2 2 0 0 00:04:56.410 asserts 5740 5740 5740 0 n/a 00:04:56.410 00:04:56.410 Elapsed time = 9.032 seconds 00:04:56.411 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.411 EAL: request: mp_malloc_sync 00:04:56.411 EAL: No shared files mode enabled, IPC is disabled 00:04:56.411 EAL: Heap on socket 0 was shrunk by 2MB 00:04:56.411 EAL: No shared files mode enabled, IPC is disabled 00:04:56.411 EAL: No shared files mode enabled, IPC is disabled 00:04:56.411 EAL: No shared files mode enabled, IPC is disabled 00:04:56.411 00:04:56.411 real 0m9.356s 00:04:56.411 user 0m8.297s 00:04:56.411 sys 0m0.887s 00:04:56.411 10:27:59 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.411 10:27:59 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:56.411 ************************************ 00:04:56.411 END TEST env_vtophys 00:04:56.411 ************************************ 00:04:56.411 10:27:59 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:56.411 10:27:59 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.411 10:27:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.411 10:27:59 env -- common/autotest_common.sh@10 -- # set +x 00:04:56.411 
************************************ 00:04:56.411 START TEST env_pci 00:04:56.411 ************************************ 00:04:56.411 10:27:59 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:56.411 00:04:56.411 00:04:56.411 CUnit - A unit testing framework for C - Version 2.1-3 00:04:56.411 http://cunit.sourceforge.net/ 00:04:56.411 00:04:56.411 00:04:56.411 Suite: pci 00:04:56.411 Test: pci_hook ...[2024-11-20 10:27:59.833125] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56851 has claimed it 00:04:56.411 passed 00:04:56.411 00:04:56.411 Run Summary: Type Total Ran Passed Failed Inactive 00:04:56.411 suites 1 1 n/a 0 0 00:04:56.411 tests 1 1 1 0 0 00:04:56.411 asserts 25 25 25 0 n/a 00:04:56.411 00:04:56.411 Elapsed time = 0.009 seconds 00:04:56.411 EAL: Cannot find device (10000:00:01.0) 00:04:56.411 EAL: Failed to attach device on primary process 00:04:56.672 00:04:56.672 real 0m0.109s 00:04:56.672 user 0m0.054s 00:04:56.672 sys 0m0.054s 00:04:56.672 10:27:59 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.672 10:27:59 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:56.672 ************************************ 00:04:56.672 END TEST env_pci 00:04:56.672 ************************************ 00:04:56.672 10:27:59 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:56.672 10:27:59 env -- env/env.sh@15 -- # uname 00:04:56.672 10:27:59 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:56.672 10:27:59 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:56.672 10:27:59 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:56.672 10:27:59 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:56.672 10:27:59 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.672 10:27:59 env -- common/autotest_common.sh@10 -- # set +x 00:04:56.672 ************************************ 00:04:56.672 START TEST env_dpdk_post_init 00:04:56.672 ************************************ 00:04:56.672 10:27:59 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:56.672 EAL: Detected CPU lcores: 10 00:04:56.672 EAL: Detected NUMA nodes: 1 00:04:56.672 EAL: Detected shared linkage of DPDK 00:04:56.672 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:56.672 EAL: Selected IOVA mode 'PA' 00:04:56.932 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:56.932 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:56.932 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:56.932 Starting DPDK initialization... 00:04:56.932 Starting SPDK post initialization... 00:04:56.932 SPDK NVMe probe 00:04:56.932 Attaching to 0000:00:10.0 00:04:56.932 Attaching to 0000:00:11.0 00:04:56.932 Attached to 0000:00:10.0 00:04:56.932 Attached to 0000:00:11.0 00:04:56.932 Cleaning up... 
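The probe output above pairs an "Attaching to" line with an "Attached to" line per PCI device; a quick, hypothetical way to confirm every probe completed (the excerpt and function name are illustrative, not SPDK tooling):

```python
# Lines like those printed above during the SPDK NVMe probe.
probe_log = """\
Attaching to 0000:00:10.0
Attaching to 0000:00:11.0
Attached to 0000:00:10.0
Attached to 0000:00:11.0
"""

def unattached_devices(text):
    """Return BDF addresses that started attaching but never finished."""
    attaching, attached = set(), set()
    for line in text.splitlines():
        if line.startswith("Attaching to "):
            attaching.add(line.split()[-1])
        elif line.startswith("Attached to "):
            attached.add(line.split()[-1])
    return attaching - attached

print(unattached_devices(probe_log))  # set() when every probe completed
```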
00:04:56.932 00:04:56.932 real 0m0.287s 00:04:56.932 user 0m0.100s 00:04:56.932 sys 0m0.088s 00:04:56.932 10:28:00 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.932 10:28:00 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:56.932 ************************************ 00:04:56.932 END TEST env_dpdk_post_init 00:04:56.932 ************************************ 00:04:56.932 10:28:00 env -- env/env.sh@26 -- # uname 00:04:56.932 10:28:00 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:56.932 10:28:00 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:56.932 10:28:00 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.932 10:28:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.932 10:28:00 env -- common/autotest_common.sh@10 -- # set +x 00:04:56.932 ************************************ 00:04:56.932 START TEST env_mem_callbacks 00:04:56.932 ************************************ 00:04:56.932 10:28:00 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:56.932 EAL: Detected CPU lcores: 10 00:04:56.932 EAL: Detected NUMA nodes: 1 00:04:56.932 EAL: Detected shared linkage of DPDK 00:04:56.932 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:56.932 EAL: Selected IOVA mode 'PA' 00:04:57.191 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:57.191 00:04:57.191 00:04:57.191 CUnit - A unit testing framework for C - Version 2.1-3 00:04:57.191 http://cunit.sourceforge.net/ 00:04:57.191 00:04:57.191 00:04:57.191 Suite: memory 00:04:57.191 Test: test ... 
00:04:57.191 register 0x200000200000 2097152 00:04:57.191 malloc 3145728 00:04:57.191 register 0x200000400000 4194304 00:04:57.191 buf 0x2000004fffc0 len 3145728 PASSED 00:04:57.191 malloc 64 00:04:57.191 buf 0x2000004ffec0 len 64 PASSED 00:04:57.191 malloc 4194304 00:04:57.191 register 0x200000800000 6291456 00:04:57.191 buf 0x2000009fffc0 len 4194304 PASSED 00:04:57.191 free 0x2000004fffc0 3145728 00:04:57.191 free 0x2000004ffec0 64 00:04:57.191 unregister 0x200000400000 4194304 PASSED 00:04:57.191 free 0x2000009fffc0 4194304 00:04:57.191 unregister 0x200000800000 6291456 PASSED 00:04:57.191 malloc 8388608 00:04:57.191 register 0x200000400000 10485760 00:04:57.191 buf 0x2000005fffc0 len 8388608 PASSED 00:04:57.191 free 0x2000005fffc0 8388608 00:04:57.191 unregister 0x200000400000 10485760 PASSED 00:04:57.191 passed 00:04:57.191 00:04:57.191 Run Summary: Type Total Ran Passed Failed Inactive 00:04:57.191 suites 1 1 n/a 0 0 00:04:57.191 tests 1 1 1 0 0 00:04:57.191 asserts 15 15 15 0 n/a 00:04:57.191 00:04:57.191 Elapsed time = 0.087 seconds 00:04:57.191 00:04:57.191 real 0m0.295s 00:04:57.191 user 0m0.113s 00:04:57.191 sys 0m0.080s 00:04:57.191 10:28:00 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.191 10:28:00 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:57.191 ************************************ 00:04:57.191 END TEST env_mem_callbacks 00:04:57.191 ************************************ 00:04:57.450 00:04:57.450 real 0m10.958s 00:04:57.450 user 0m9.090s 00:04:57.450 sys 0m1.501s 00:04:57.450 10:28:00 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.450 10:28:00 env -- common/autotest_common.sh@10 -- # set +x 00:04:57.450 ************************************ 00:04:57.450 END TEST env 00:04:57.450 ************************************ 00:04:57.450 10:28:00 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:57.450 10:28:00 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.450 10:28:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.450 10:28:00 -- common/autotest_common.sh@10 -- # set +x 00:04:57.450 ************************************ 00:04:57.450 START TEST rpc 00:04:57.450 ************************************ 00:04:57.450 10:28:00 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:57.450 * Looking for test storage... 00:04:57.450 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:57.451 10:28:00 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:57.451 10:28:00 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:57.451 10:28:00 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:57.709 10:28:00 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:57.709 10:28:00 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.709 10:28:00 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.709 10:28:00 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.709 10:28:00 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.709 10:28:00 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.709 10:28:00 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.709 10:28:00 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.709 10:28:00 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.709 10:28:00 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.709 10:28:00 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.709 10:28:00 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.709 10:28:00 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:57.709 10:28:00 rpc -- scripts/common.sh@345 -- # : 1 00:04:57.709 10:28:00 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.709 10:28:00 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:57.709 10:28:00 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:57.709 10:28:00 rpc -- scripts/common.sh@353 -- # local d=1 00:04:57.709 10:28:00 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.709 10:28:00 rpc -- scripts/common.sh@355 -- # echo 1 00:04:57.709 10:28:00 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.709 10:28:00 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:57.709 10:28:00 rpc -- scripts/common.sh@353 -- # local d=2 00:04:57.709 10:28:00 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.709 10:28:00 rpc -- scripts/common.sh@355 -- # echo 2 00:04:57.709 10:28:00 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.709 10:28:00 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.709 10:28:00 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.710 10:28:00 rpc -- scripts/common.sh@368 -- # return 0 00:04:57.710 10:28:00 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.710 10:28:00 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:57.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.710 --rc genhtml_branch_coverage=1 00:04:57.710 --rc genhtml_function_coverage=1 00:04:57.710 --rc genhtml_legend=1 00:04:57.710 --rc geninfo_all_blocks=1 00:04:57.710 --rc geninfo_unexecuted_blocks=1 00:04:57.710 00:04:57.710 ' 00:04:57.710 10:28:00 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:57.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.710 --rc genhtml_branch_coverage=1 00:04:57.710 --rc genhtml_function_coverage=1 00:04:57.710 --rc genhtml_legend=1 00:04:57.710 --rc geninfo_all_blocks=1 00:04:57.710 --rc geninfo_unexecuted_blocks=1 00:04:57.710 00:04:57.710 ' 00:04:57.710 10:28:00 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:57.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:57.710 --rc genhtml_branch_coverage=1 00:04:57.710 --rc genhtml_function_coverage=1 00:04:57.710 --rc genhtml_legend=1 00:04:57.710 --rc geninfo_all_blocks=1 00:04:57.710 --rc geninfo_unexecuted_blocks=1 00:04:57.710 00:04:57.710 ' 00:04:57.710 10:28:00 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:57.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.710 --rc genhtml_branch_coverage=1 00:04:57.710 --rc genhtml_function_coverage=1 00:04:57.710 --rc genhtml_legend=1 00:04:57.710 --rc geninfo_all_blocks=1 00:04:57.710 --rc geninfo_unexecuted_blocks=1 00:04:57.710 00:04:57.710 ' 00:04:57.710 10:28:00 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56978 00:04:57.710 10:28:00 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:57.710 10:28:00 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56978 00:04:57.710 10:28:00 rpc -- common/autotest_common.sh@835 -- # '[' -z 56978 ']' 00:04:57.710 10:28:00 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.710 10:28:00 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.710 10:28:00 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.710 10:28:00 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.710 10:28:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.710 10:28:00 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:57.710 [2024-11-20 10:28:01.067175] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:04:57.710 [2024-11-20 10:28:01.067316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56978 ] 00:04:57.969 [2024-11-20 10:28:01.249867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.969 [2024-11-20 10:28:01.380169] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:57.969 [2024-11-20 10:28:01.380242] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56978' to capture a snapshot of events at runtime. 00:04:57.969 [2024-11-20 10:28:01.380253] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:57.969 [2024-11-20 10:28:01.380265] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:57.969 [2024-11-20 10:28:01.380273] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56978 for offline analysis/debug. 
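The "[ DPDK EAL parameters: ... ]" line above packs the spdk_tgt EAL configuration into a single string. A rough sketch of splitting such a string into flags and options — the parser and trimmed parameter set are my own; the real line also carries repeated `--log-level` options, which a plain dict would collapse to the last occurrence:

```python
# Trimmed version of the EAL parameter string logged above.
params = ("spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry "
          "--log-level=lib.eal:6 --iova-mode=pa "
          "--base-virtaddr=0x200000000000 --file-prefix=spdk_pid56978")

def parse_eal_args(text):
    """Split an EAL parameter string into (program, flags, key/value options)."""
    tokens = text.split()
    prog, args = tokens[0], tokens[1:]
    opts, flags = {}, []
    i = 0
    while i < len(args):
        tok = args[i]
        if "=" in tok:
            k, v = tok.split("=", 1)
            opts[k] = v
        elif tok == "-c":            # coremask takes a separate value token
            opts[tok] = args[i + 1]
            i += 1
        else:
            flags.append(tok)
        i += 1
    return prog, flags, opts

prog, flags, opts = parse_eal_args(params)
print(prog, opts["-c"])  # spdk_tgt 0x1
```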
00:04:57.969 [2024-11-20 10:28:01.381837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.357 10:28:02 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.357 10:28:02 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:59.357 10:28:02 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:59.357 10:28:02 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:59.357 10:28:02 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:59.357 10:28:02 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:59.357 10:28:02 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.357 10:28:02 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.357 10:28:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.357 ************************************ 00:04:59.357 START TEST rpc_integrity 00:04:59.357 ************************************ 00:04:59.357 10:28:02 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:59.357 10:28:02 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:59.357 10:28:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.357 10:28:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.357 10:28:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.357 10:28:02 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:59.357 10:28:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:59.357 10:28:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:59.357 10:28:02 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:59.357 10:28:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.357 10:28:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.357 10:28:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.357 10:28:02 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:59.357 10:28:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:59.357 10:28:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.358 10:28:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.358 10:28:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.358 10:28:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:59.358 { 00:04:59.358 "name": "Malloc0", 00:04:59.358 "aliases": [ 00:04:59.358 "e94fbaee-4b6f-43bc-9fa4-c4b517d5e767" 00:04:59.358 ], 00:04:59.358 "product_name": "Malloc disk", 00:04:59.358 "block_size": 512, 00:04:59.358 "num_blocks": 16384, 00:04:59.358 "uuid": "e94fbaee-4b6f-43bc-9fa4-c4b517d5e767", 00:04:59.358 "assigned_rate_limits": { 00:04:59.358 "rw_ios_per_sec": 0, 00:04:59.358 "rw_mbytes_per_sec": 0, 00:04:59.358 "r_mbytes_per_sec": 0, 00:04:59.358 "w_mbytes_per_sec": 0 00:04:59.358 }, 00:04:59.358 "claimed": false, 00:04:59.358 "zoned": false, 00:04:59.358 "supported_io_types": { 00:04:59.358 "read": true, 00:04:59.358 "write": true, 00:04:59.358 "unmap": true, 00:04:59.358 "flush": true, 00:04:59.358 "reset": true, 00:04:59.358 "nvme_admin": false, 00:04:59.358 "nvme_io": false, 00:04:59.358 "nvme_io_md": false, 00:04:59.358 "write_zeroes": true, 00:04:59.358 "zcopy": true, 00:04:59.358 "get_zone_info": false, 00:04:59.358 "zone_management": false, 00:04:59.358 "zone_append": false, 00:04:59.358 "compare": false, 00:04:59.358 "compare_and_write": false, 00:04:59.358 "abort": true, 00:04:59.358 "seek_hole": false, 
00:04:59.358 "seek_data": false, 00:04:59.358 "copy": true, 00:04:59.358 "nvme_iov_md": false 00:04:59.358 }, 00:04:59.358 "memory_domains": [ 00:04:59.358 { 00:04:59.358 "dma_device_id": "system", 00:04:59.358 "dma_device_type": 1 00:04:59.358 }, 00:04:59.358 { 00:04:59.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.358 "dma_device_type": 2 00:04:59.358 } 00:04:59.358 ], 00:04:59.358 "driver_specific": {} 00:04:59.358 } 00:04:59.358 ]' 00:04:59.358 10:28:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:59.358 10:28:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:59.358 10:28:02 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:59.358 10:28:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.358 10:28:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.358 [2024-11-20 10:28:02.564714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:59.358 [2024-11-20 10:28:02.564810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:59.358 [2024-11-20 10:28:02.564838] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:59.358 [2024-11-20 10:28:02.564855] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:59.358 [2024-11-20 10:28:02.567583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:59.358 [2024-11-20 10:28:02.567643] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:59.358 Passthru0 00:04:59.358 10:28:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.358 10:28:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:59.358 10:28:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.358 10:28:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:59.358 10:28:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.358 10:28:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:59.358 { 00:04:59.358 "name": "Malloc0", 00:04:59.358 "aliases": [ 00:04:59.358 "e94fbaee-4b6f-43bc-9fa4-c4b517d5e767" 00:04:59.358 ], 00:04:59.358 "product_name": "Malloc disk", 00:04:59.358 "block_size": 512, 00:04:59.358 "num_blocks": 16384, 00:04:59.358 "uuid": "e94fbaee-4b6f-43bc-9fa4-c4b517d5e767", 00:04:59.358 "assigned_rate_limits": { 00:04:59.358 "rw_ios_per_sec": 0, 00:04:59.358 "rw_mbytes_per_sec": 0, 00:04:59.358 "r_mbytes_per_sec": 0, 00:04:59.358 "w_mbytes_per_sec": 0 00:04:59.358 }, 00:04:59.358 "claimed": true, 00:04:59.358 "claim_type": "exclusive_write", 00:04:59.358 "zoned": false, 00:04:59.358 "supported_io_types": { 00:04:59.358 "read": true, 00:04:59.358 "write": true, 00:04:59.358 "unmap": true, 00:04:59.358 "flush": true, 00:04:59.358 "reset": true, 00:04:59.358 "nvme_admin": false, 00:04:59.358 "nvme_io": false, 00:04:59.358 "nvme_io_md": false, 00:04:59.358 "write_zeroes": true, 00:04:59.358 "zcopy": true, 00:04:59.358 "get_zone_info": false, 00:04:59.358 "zone_management": false, 00:04:59.358 "zone_append": false, 00:04:59.358 "compare": false, 00:04:59.358 "compare_and_write": false, 00:04:59.358 "abort": true, 00:04:59.358 "seek_hole": false, 00:04:59.358 "seek_data": false, 00:04:59.358 "copy": true, 00:04:59.358 "nvme_iov_md": false 00:04:59.358 }, 00:04:59.358 "memory_domains": [ 00:04:59.358 { 00:04:59.358 "dma_device_id": "system", 00:04:59.358 "dma_device_type": 1 00:04:59.358 }, 00:04:59.358 { 00:04:59.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.358 "dma_device_type": 2 00:04:59.358 } 00:04:59.358 ], 00:04:59.358 "driver_specific": {} 00:04:59.358 }, 00:04:59.358 { 00:04:59.358 "name": "Passthru0", 00:04:59.358 "aliases": [ 00:04:59.358 "32e4eeed-282b-521a-945a-13d70de48b0a" 00:04:59.358 ], 00:04:59.358 "product_name": "passthru", 00:04:59.358 
"block_size": 512, 00:04:59.358 "num_blocks": 16384, 00:04:59.358 "uuid": "32e4eeed-282b-521a-945a-13d70de48b0a", 00:04:59.358 "assigned_rate_limits": { 00:04:59.358 "rw_ios_per_sec": 0, 00:04:59.358 "rw_mbytes_per_sec": 0, 00:04:59.358 "r_mbytes_per_sec": 0, 00:04:59.358 "w_mbytes_per_sec": 0 00:04:59.358 }, 00:04:59.358 "claimed": false, 00:04:59.358 "zoned": false, 00:04:59.358 "supported_io_types": { 00:04:59.358 "read": true, 00:04:59.358 "write": true, 00:04:59.358 "unmap": true, 00:04:59.358 "flush": true, 00:04:59.358 "reset": true, 00:04:59.358 "nvme_admin": false, 00:04:59.358 "nvme_io": false, 00:04:59.358 "nvme_io_md": false, 00:04:59.358 "write_zeroes": true, 00:04:59.358 "zcopy": true, 00:04:59.358 "get_zone_info": false, 00:04:59.358 "zone_management": false, 00:04:59.358 "zone_append": false, 00:04:59.358 "compare": false, 00:04:59.358 "compare_and_write": false, 00:04:59.358 "abort": true, 00:04:59.358 "seek_hole": false, 00:04:59.358 "seek_data": false, 00:04:59.358 "copy": true, 00:04:59.358 "nvme_iov_md": false 00:04:59.358 }, 00:04:59.358 "memory_domains": [ 00:04:59.358 { 00:04:59.358 "dma_device_id": "system", 00:04:59.358 "dma_device_type": 1 00:04:59.358 }, 00:04:59.358 { 00:04:59.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.358 "dma_device_type": 2 00:04:59.358 } 00:04:59.358 ], 00:04:59.358 "driver_specific": { 00:04:59.358 "passthru": { 00:04:59.358 "name": "Passthru0", 00:04:59.358 "base_bdev_name": "Malloc0" 00:04:59.358 } 00:04:59.358 } 00:04:59.358 } 00:04:59.358 ]' 00:04:59.358 10:28:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:59.358 10:28:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:59.358 10:28:02 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:59.358 10:28:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.358 10:28:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.358 10:28:02 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.358 10:28:02 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:59.358 10:28:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.358 10:28:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.358 10:28:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.358 10:28:02 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:59.358 10:28:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.358 10:28:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.358 10:28:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.358 10:28:02 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:59.358 10:28:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:59.358 10:28:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:59.358 00:04:59.358 real 0m0.339s 00:04:59.358 user 0m0.179s 00:04:59.358 sys 0m0.053s 00:04:59.358 10:28:02 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.358 10:28:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.358 ************************************ 00:04:59.358 END TEST rpc_integrity 00:04:59.358 ************************************ 00:04:59.358 10:28:02 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:59.358 10:28:02 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.358 10:28:02 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.358 10:28:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.358 ************************************ 00:04:59.358 START TEST rpc_plugins 00:04:59.358 ************************************ 00:04:59.358 10:28:02 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:59.358 10:28:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:59.358 10:28:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.358 10:28:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:59.358 10:28:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.358 10:28:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:59.358 10:28:02 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:59.358 10:28:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.358 10:28:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:59.618 10:28:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.618 10:28:02 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:59.618 { 00:04:59.618 "name": "Malloc1", 00:04:59.618 "aliases": [ 00:04:59.618 "9712ad48-f42d-4fba-ae0f-90dda1011014" 00:04:59.618 ], 00:04:59.618 "product_name": "Malloc disk", 00:04:59.618 "block_size": 4096, 00:04:59.618 "num_blocks": 256, 00:04:59.618 "uuid": "9712ad48-f42d-4fba-ae0f-90dda1011014", 00:04:59.618 "assigned_rate_limits": { 00:04:59.618 "rw_ios_per_sec": 0, 00:04:59.618 "rw_mbytes_per_sec": 0, 00:04:59.618 "r_mbytes_per_sec": 0, 00:04:59.618 "w_mbytes_per_sec": 0 00:04:59.618 }, 00:04:59.618 "claimed": false, 00:04:59.618 "zoned": false, 00:04:59.618 "supported_io_types": { 00:04:59.618 "read": true, 00:04:59.618 "write": true, 00:04:59.618 "unmap": true, 00:04:59.618 "flush": true, 00:04:59.618 "reset": true, 00:04:59.618 "nvme_admin": false, 00:04:59.618 "nvme_io": false, 00:04:59.618 "nvme_io_md": false, 00:04:59.618 "write_zeroes": true, 00:04:59.618 "zcopy": true, 00:04:59.618 "get_zone_info": false, 00:04:59.618 "zone_management": false, 00:04:59.618 "zone_append": false, 00:04:59.618 "compare": false, 00:04:59.618 "compare_and_write": false, 00:04:59.618 "abort": true, 00:04:59.618 "seek_hole": false, 00:04:59.618 "seek_data": false, 00:04:59.618 "copy": 
true, 00:04:59.618 "nvme_iov_md": false 00:04:59.618 }, 00:04:59.618 "memory_domains": [ 00:04:59.618 { 00:04:59.618 "dma_device_id": "system", 00:04:59.618 "dma_device_type": 1 00:04:59.618 }, 00:04:59.618 { 00:04:59.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.618 "dma_device_type": 2 00:04:59.618 } 00:04:59.618 ], 00:04:59.618 "driver_specific": {} 00:04:59.618 } 00:04:59.618 ]' 00:04:59.618 10:28:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:59.618 10:28:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:59.618 10:28:02 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:59.618 10:28:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.618 10:28:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:59.618 10:28:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.618 10:28:02 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:59.618 10:28:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.618 10:28:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:59.618 10:28:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.618 10:28:02 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:59.618 10:28:02 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:59.618 10:28:02 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:59.618 00:04:59.618 real 0m0.181s 00:04:59.618 user 0m0.097s 00:04:59.618 sys 0m0.029s 00:04:59.618 10:28:02 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.618 10:28:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:59.618 ************************************ 00:04:59.618 END TEST rpc_plugins 00:04:59.618 ************************************ 00:04:59.618 10:28:03 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:59.618 10:28:03 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.618 10:28:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.618 10:28:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.618 ************************************ 00:04:59.618 START TEST rpc_trace_cmd_test 00:04:59.618 ************************************ 00:04:59.618 10:28:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:59.618 10:28:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:59.618 10:28:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:59.618 10:28:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.618 10:28:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:59.618 10:28:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.618 10:28:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:59.618 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56978", 00:04:59.618 "tpoint_group_mask": "0x8", 00:04:59.618 "iscsi_conn": { 00:04:59.618 "mask": "0x2", 00:04:59.618 "tpoint_mask": "0x0" 00:04:59.618 }, 00:04:59.618 "scsi": { 00:04:59.618 "mask": "0x4", 00:04:59.618 "tpoint_mask": "0x0" 00:04:59.618 }, 00:04:59.618 "bdev": { 00:04:59.618 "mask": "0x8", 00:04:59.618 "tpoint_mask": "0xffffffffffffffff" 00:04:59.618 }, 00:04:59.618 "nvmf_rdma": { 00:04:59.618 "mask": "0x10", 00:04:59.618 "tpoint_mask": "0x0" 00:04:59.618 }, 00:04:59.618 "nvmf_tcp": { 00:04:59.618 "mask": "0x20", 00:04:59.618 "tpoint_mask": "0x0" 00:04:59.618 }, 00:04:59.619 "ftl": { 00:04:59.619 "mask": "0x40", 00:04:59.619 "tpoint_mask": "0x0" 00:04:59.619 }, 00:04:59.619 "blobfs": { 00:04:59.619 "mask": "0x80", 00:04:59.619 "tpoint_mask": "0x0" 00:04:59.619 }, 00:04:59.619 "dsa": { 00:04:59.619 "mask": "0x200", 00:04:59.619 "tpoint_mask": "0x0" 00:04:59.619 }, 00:04:59.619 "thread": { 00:04:59.619 "mask": "0x400", 00:04:59.619 
"tpoint_mask": "0x0" 00:04:59.619 }, 00:04:59.619 "nvme_pcie": { 00:04:59.619 "mask": "0x800", 00:04:59.619 "tpoint_mask": "0x0" 00:04:59.619 }, 00:04:59.619 "iaa": { 00:04:59.619 "mask": "0x1000", 00:04:59.619 "tpoint_mask": "0x0" 00:04:59.619 }, 00:04:59.619 "nvme_tcp": { 00:04:59.619 "mask": "0x2000", 00:04:59.619 "tpoint_mask": "0x0" 00:04:59.619 }, 00:04:59.619 "bdev_nvme": { 00:04:59.619 "mask": "0x4000", 00:04:59.619 "tpoint_mask": "0x0" 00:04:59.619 }, 00:04:59.619 "sock": { 00:04:59.619 "mask": "0x8000", 00:04:59.619 "tpoint_mask": "0x0" 00:04:59.619 }, 00:04:59.619 "blob": { 00:04:59.619 "mask": "0x10000", 00:04:59.619 "tpoint_mask": "0x0" 00:04:59.619 }, 00:04:59.619 "bdev_raid": { 00:04:59.619 "mask": "0x20000", 00:04:59.619 "tpoint_mask": "0x0" 00:04:59.619 }, 00:04:59.619 "scheduler": { 00:04:59.619 "mask": "0x40000", 00:04:59.619 "tpoint_mask": "0x0" 00:04:59.619 } 00:04:59.619 }' 00:04:59.619 10:28:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:59.878 10:28:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:59.878 10:28:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:59.878 10:28:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:59.878 10:28:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:59.878 10:28:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:59.878 10:28:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:59.878 10:28:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:59.878 10:28:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:59.878 10:28:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:59.878 00:04:59.878 real 0m0.241s 00:04:59.878 user 0m0.199s 00:04:59.878 sys 0m0.034s 00:04:59.878 10:28:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:04:59.878 10:28:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:59.878 ************************************ 00:04:59.878 END TEST rpc_trace_cmd_test 00:04:59.878 ************************************ 00:04:59.878 10:28:03 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:59.878 10:28:03 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:59.878 10:28:03 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:59.878 10:28:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.878 10:28:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.878 10:28:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.878 ************************************ 00:04:59.878 START TEST rpc_daemon_integrity 00:04:59.878 ************************************ 00:04:59.878 10:28:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:59.878 10:28:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:59.878 10:28:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.878 10:28:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.878 10:28:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.878 10:28:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:59.878 10:28:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:00.138 10:28:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:00.138 10:28:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:00.138 10:28:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.138 10:28:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.138 10:28:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.138 10:28:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:05:00.138 10:28:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:00.138 10:28:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.138 10:28:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.138 10:28:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.138 10:28:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:00.138 { 00:05:00.138 "name": "Malloc2", 00:05:00.138 "aliases": [ 00:05:00.138 "730e5dd1-f3e9-402f-b74b-bb28d4a82b9d" 00:05:00.138 ], 00:05:00.138 "product_name": "Malloc disk", 00:05:00.138 "block_size": 512, 00:05:00.138 "num_blocks": 16384, 00:05:00.138 "uuid": "730e5dd1-f3e9-402f-b74b-bb28d4a82b9d", 00:05:00.138 "assigned_rate_limits": { 00:05:00.138 "rw_ios_per_sec": 0, 00:05:00.138 "rw_mbytes_per_sec": 0, 00:05:00.138 "r_mbytes_per_sec": 0, 00:05:00.138 "w_mbytes_per_sec": 0 00:05:00.138 }, 00:05:00.138 "claimed": false, 00:05:00.138 "zoned": false, 00:05:00.138 "supported_io_types": { 00:05:00.138 "read": true, 00:05:00.138 "write": true, 00:05:00.138 "unmap": true, 00:05:00.138 "flush": true, 00:05:00.138 "reset": true, 00:05:00.138 "nvme_admin": false, 00:05:00.138 "nvme_io": false, 00:05:00.138 "nvme_io_md": false, 00:05:00.138 "write_zeroes": true, 00:05:00.138 "zcopy": true, 00:05:00.138 "get_zone_info": false, 00:05:00.138 "zone_management": false, 00:05:00.138 "zone_append": false, 00:05:00.138 "compare": false, 00:05:00.138 "compare_and_write": false, 00:05:00.138 "abort": true, 00:05:00.138 "seek_hole": false, 00:05:00.138 "seek_data": false, 00:05:00.138 "copy": true, 00:05:00.138 "nvme_iov_md": false 00:05:00.138 }, 00:05:00.138 "memory_domains": [ 00:05:00.138 { 00:05:00.138 "dma_device_id": "system", 00:05:00.138 "dma_device_type": 1 00:05:00.138 }, 00:05:00.138 { 00:05:00.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.138 "dma_device_type": 2 00:05:00.138 } 
00:05:00.138 ], 00:05:00.138 "driver_specific": {} 00:05:00.138 } 00:05:00.138 ]' 00:05:00.138 10:28:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:00.138 10:28:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:00.138 10:28:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:00.138 10:28:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.138 10:28:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.138 [2024-11-20 10:28:03.496391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:00.138 [2024-11-20 10:28:03.496490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:00.139 [2024-11-20 10:28:03.496516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:00.139 [2024-11-20 10:28:03.496530] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:00.139 [2024-11-20 10:28:03.499283] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:00.139 [2024-11-20 10:28:03.499365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:00.139 Passthru0 00:05:00.139 10:28:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.139 10:28:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:00.139 10:28:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.139 10:28:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.139 10:28:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.139 10:28:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:00.139 { 00:05:00.139 "name": "Malloc2", 00:05:00.139 "aliases": [ 00:05:00.139 "730e5dd1-f3e9-402f-b74b-bb28d4a82b9d" 
00:05:00.139 ], 00:05:00.139 "product_name": "Malloc disk", 00:05:00.139 "block_size": 512, 00:05:00.139 "num_blocks": 16384, 00:05:00.139 "uuid": "730e5dd1-f3e9-402f-b74b-bb28d4a82b9d", 00:05:00.139 "assigned_rate_limits": { 00:05:00.139 "rw_ios_per_sec": 0, 00:05:00.139 "rw_mbytes_per_sec": 0, 00:05:00.139 "r_mbytes_per_sec": 0, 00:05:00.139 "w_mbytes_per_sec": 0 00:05:00.139 }, 00:05:00.139 "claimed": true, 00:05:00.139 "claim_type": "exclusive_write", 00:05:00.139 "zoned": false, 00:05:00.139 "supported_io_types": { 00:05:00.139 "read": true, 00:05:00.139 "write": true, 00:05:00.139 "unmap": true, 00:05:00.139 "flush": true, 00:05:00.139 "reset": true, 00:05:00.139 "nvme_admin": false, 00:05:00.139 "nvme_io": false, 00:05:00.139 "nvme_io_md": false, 00:05:00.139 "write_zeroes": true, 00:05:00.139 "zcopy": true, 00:05:00.139 "get_zone_info": false, 00:05:00.139 "zone_management": false, 00:05:00.139 "zone_append": false, 00:05:00.139 "compare": false, 00:05:00.139 "compare_and_write": false, 00:05:00.139 "abort": true, 00:05:00.139 "seek_hole": false, 00:05:00.139 "seek_data": false, 00:05:00.139 "copy": true, 00:05:00.139 "nvme_iov_md": false 00:05:00.139 }, 00:05:00.139 "memory_domains": [ 00:05:00.139 { 00:05:00.139 "dma_device_id": "system", 00:05:00.139 "dma_device_type": 1 00:05:00.139 }, 00:05:00.139 { 00:05:00.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.139 "dma_device_type": 2 00:05:00.139 } 00:05:00.139 ], 00:05:00.139 "driver_specific": {} 00:05:00.139 }, 00:05:00.139 { 00:05:00.139 "name": "Passthru0", 00:05:00.139 "aliases": [ 00:05:00.139 "746b27a7-90f2-58ee-9681-92b4541c6a16" 00:05:00.139 ], 00:05:00.139 "product_name": "passthru", 00:05:00.139 "block_size": 512, 00:05:00.139 "num_blocks": 16384, 00:05:00.139 "uuid": "746b27a7-90f2-58ee-9681-92b4541c6a16", 00:05:00.139 "assigned_rate_limits": { 00:05:00.139 "rw_ios_per_sec": 0, 00:05:00.139 "rw_mbytes_per_sec": 0, 00:05:00.139 "r_mbytes_per_sec": 0, 00:05:00.139 "w_mbytes_per_sec": 0 
00:05:00.139 }, 00:05:00.139 "claimed": false, 00:05:00.139 "zoned": false, 00:05:00.139 "supported_io_types": { 00:05:00.139 "read": true, 00:05:00.139 "write": true, 00:05:00.139 "unmap": true, 00:05:00.139 "flush": true, 00:05:00.139 "reset": true, 00:05:00.139 "nvme_admin": false, 00:05:00.139 "nvme_io": false, 00:05:00.139 "nvme_io_md": false, 00:05:00.139 "write_zeroes": true, 00:05:00.139 "zcopy": true, 00:05:00.139 "get_zone_info": false, 00:05:00.139 "zone_management": false, 00:05:00.139 "zone_append": false, 00:05:00.139 "compare": false, 00:05:00.139 "compare_and_write": false, 00:05:00.139 "abort": true, 00:05:00.139 "seek_hole": false, 00:05:00.139 "seek_data": false, 00:05:00.139 "copy": true, 00:05:00.139 "nvme_iov_md": false 00:05:00.139 }, 00:05:00.139 "memory_domains": [ 00:05:00.139 { 00:05:00.139 "dma_device_id": "system", 00:05:00.139 "dma_device_type": 1 00:05:00.139 }, 00:05:00.139 { 00:05:00.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.139 "dma_device_type": 2 00:05:00.139 } 00:05:00.139 ], 00:05:00.139 "driver_specific": { 00:05:00.139 "passthru": { 00:05:00.139 "name": "Passthru0", 00:05:00.139 "base_bdev_name": "Malloc2" 00:05:00.139 } 00:05:00.139 } 00:05:00.139 } 00:05:00.139 ]' 00:05:00.139 10:28:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:00.139 10:28:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:00.139 10:28:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:00.139 10:28:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.139 10:28:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.139 10:28:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.139 10:28:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:00.139 10:28:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:05:00.139 10:28:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.398 10:28:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.398 10:28:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:00.398 10:28:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.398 10:28:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.398 10:28:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.398 10:28:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:00.398 10:28:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:00.398 10:28:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:00.398 00:05:00.398 real 0m0.371s 00:05:00.398 user 0m0.209s 00:05:00.398 sys 0m0.048s 00:05:00.398 10:28:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.398 10:28:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.398 ************************************ 00:05:00.398 END TEST rpc_daemon_integrity 00:05:00.398 ************************************ 00:05:00.398 10:28:03 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:00.398 10:28:03 rpc -- rpc/rpc.sh@84 -- # killprocess 56978 00:05:00.398 10:28:03 rpc -- common/autotest_common.sh@954 -- # '[' -z 56978 ']' 00:05:00.398 10:28:03 rpc -- common/autotest_common.sh@958 -- # kill -0 56978 00:05:00.398 10:28:03 rpc -- common/autotest_common.sh@959 -- # uname 00:05:00.398 10:28:03 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.398 10:28:03 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56978 00:05:00.398 10:28:03 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:00.398 10:28:03 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:00.398 
killing process with pid 56978 00:05:00.398 10:28:03 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56978' 00:05:00.398 10:28:03 rpc -- common/autotest_common.sh@973 -- # kill 56978 00:05:00.398 10:28:03 rpc -- common/autotest_common.sh@978 -- # wait 56978 00:05:03.677 00:05:03.677 real 0m5.861s 00:05:03.677 user 0m6.488s 00:05:03.677 sys 0m0.919s 00:05:03.677 10:28:06 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.677 10:28:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.677 ************************************ 00:05:03.677 END TEST rpc 00:05:03.677 ************************************ 00:05:03.677 10:28:06 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:03.677 10:28:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.677 10:28:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.677 10:28:06 -- common/autotest_common.sh@10 -- # set +x 00:05:03.677 ************************************ 00:05:03.677 START TEST skip_rpc 00:05:03.677 ************************************ 00:05:03.677 10:28:06 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:03.677 * Looking for test storage... 
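The skip_rpc trace that follows walks through scripts/common.sh's version-comparison helpers (`lt 1.15 2` via `cmp_versions`), splitting each version string on `.`, `-`, and `:` and comparing the integer fields in order. A minimal Python sketch of that dotted-version comparison; the helper names here are ours for illustration, not SPDK's bash functions:

```python
import re

# Hypothetical re-implementation of the field-by-field version comparison
# that scripts/common.sh performs in bash (IFS=.-: plus a per-field loop).
def version_fields(ver):
    """Split a version string on '.', '-' and ':' into integer fields."""
    return [int(f) for f in re.split(r"[.\-:]", ver) if f.isdigit()]

def version_lt(a, b):
    """Return True if version a sorts strictly before version b."""
    fa, fb = version_fields(a), version_fields(b)
    # Pad the shorter field list with zeros, as the bash loop effectively
    # does by iterating up to the longer list's length.
    n = max(len(fa), len(fb))
    fa += [0] * (n - len(fa))
    fb += [0] * (n - len(fb))
    return fa < fb  # lexicographic comparison of integer fields

print(version_lt("1.15", "2"))  # the check the lcov trace performs: True
```

Note that `1.15 < 2` holds here because the comparison is per numeric field (`[1, 15]` vs `[2, 0]`), not a string comparison.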
00:05:03.677 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:03.677 10:28:06 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:03.677 10:28:06 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:03.677 10:28:06 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:03.677 10:28:06 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:03.677 10:28:06 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.677 10:28:06 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.677 10:28:06 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.677 10:28:06 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.677 10:28:06 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.677 10:28:06 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.677 10:28:06 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.677 10:28:06 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.677 10:28:06 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.677 10:28:06 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.677 10:28:06 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.677 10:28:06 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:03.677 10:28:06 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:03.677 10:28:06 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.677 10:28:06 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.677 10:28:06 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:03.677 10:28:06 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:03.677 10:28:06 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.677 10:28:06 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:03.677 10:28:06 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.677 10:28:06 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:03.677 10:28:06 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:03.677 10:28:06 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.677 10:28:06 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:03.677 10:28:06 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.677 10:28:06 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.677 10:28:06 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.677 10:28:06 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:03.677 10:28:06 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.677 10:28:06 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:03.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.677 --rc genhtml_branch_coverage=1 00:05:03.677 --rc genhtml_function_coverage=1 00:05:03.677 --rc genhtml_legend=1 00:05:03.677 --rc geninfo_all_blocks=1 00:05:03.677 --rc geninfo_unexecuted_blocks=1 00:05:03.677 00:05:03.677 ' 00:05:03.677 10:28:06 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:03.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.677 --rc genhtml_branch_coverage=1 00:05:03.677 --rc genhtml_function_coverage=1 00:05:03.677 --rc genhtml_legend=1 00:05:03.677 --rc geninfo_all_blocks=1 00:05:03.677 --rc geninfo_unexecuted_blocks=1 00:05:03.677 00:05:03.677 ' 00:05:03.677 10:28:06 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:03.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.677 --rc genhtml_branch_coverage=1 00:05:03.677 --rc genhtml_function_coverage=1 00:05:03.677 --rc genhtml_legend=1 00:05:03.677 --rc geninfo_all_blocks=1 00:05:03.677 --rc geninfo_unexecuted_blocks=1 00:05:03.677 00:05:03.677 ' 00:05:03.677 10:28:06 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:03.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.677 --rc genhtml_branch_coverage=1 00:05:03.677 --rc genhtml_function_coverage=1 00:05:03.677 --rc genhtml_legend=1 00:05:03.677 --rc geninfo_all_blocks=1 00:05:03.677 --rc geninfo_unexecuted_blocks=1 00:05:03.677 00:05:03.677 ' 00:05:03.677 10:28:06 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:03.677 10:28:06 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:03.677 10:28:06 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:03.677 10:28:06 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.677 10:28:06 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.677 10:28:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.677 ************************************ 00:05:03.677 START TEST skip_rpc 00:05:03.677 ************************************ 00:05:03.677 10:28:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:03.677 10:28:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57218 00:05:03.677 10:28:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:03.677 10:28:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:03.677 10:28:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:03.677 [2024-11-20 10:28:06.979974] Starting SPDK v25.01-pre 
git sha1 097badaeb / DPDK 24.03.0 initialization... 00:05:03.677 [2024-11-20 10:28:06.980629] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57218 ] 00:05:03.977 [2024-11-20 10:28:07.160434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.977 [2024-11-20 10:28:07.303317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.329 10:28:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:09.329 10:28:11 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:09.329 10:28:11 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:09.329 10:28:11 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:09.329 10:28:11 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.329 10:28:11 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:09.329 10:28:11 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.329 10:28:11 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:09.329 10:28:11 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.329 10:28:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.329 10:28:11 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:09.329 10:28:11 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:09.329 10:28:11 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:09.329 10:28:11 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:09.329 10:28:11 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:05:09.329 10:28:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:09.329 10:28:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57218 00:05:09.329 10:28:11 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57218 ']' 00:05:09.329 10:28:11 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57218 00:05:09.329 10:28:11 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:09.329 10:28:11 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:09.329 10:28:11 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57218 00:05:09.329 10:28:11 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:09.329 10:28:11 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:09.329 killing process with pid 57218 00:05:09.329 10:28:11 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57218' 00:05:09.329 10:28:11 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57218 00:05:09.329 10:28:11 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57218 00:05:11.237 00:05:11.237 real 0m7.654s 00:05:11.237 user 0m7.162s 00:05:11.237 sys 0m0.405s 00:05:11.237 10:28:14 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.237 10:28:14 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.237 ************************************ 00:05:11.237 END TEST skip_rpc 00:05:11.237 ************************************ 00:05:11.237 10:28:14 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:11.237 10:28:14 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.237 10:28:14 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.237 10:28:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.237 
************************************ 00:05:11.237 START TEST skip_rpc_with_json 00:05:11.237 ************************************ 00:05:11.237 10:28:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:11.237 10:28:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:11.237 10:28:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57322 00:05:11.237 10:28:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:11.237 10:28:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.237 10:28:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57322 00:05:11.237 10:28:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57322 ']' 00:05:11.237 10:28:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.237 10:28:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.237 10:28:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.237 10:28:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.237 10:28:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:11.237 [2024-11-20 10:28:14.693854] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:05:11.237 [2024-11-20 10:28:14.693976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57322 ] 00:05:11.497 [2024-11-20 10:28:14.855648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.755 [2024-11-20 10:28:14.981817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.690 10:28:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.690 10:28:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:12.690 10:28:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:12.690 10:28:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.690 10:28:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:12.690 [2024-11-20 10:28:15.885063] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:12.690 request: 00:05:12.690 { 00:05:12.690 "trtype": "tcp", 00:05:12.690 "method": "nvmf_get_transports", 00:05:12.690 "req_id": 1 00:05:12.690 } 00:05:12.690 Got JSON-RPC error response 00:05:12.690 response: 00:05:12.690 { 00:05:12.690 "code": -19, 00:05:12.690 "message": "No such device" 00:05:12.690 } 00:05:12.690 10:28:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:12.690 10:28:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:12.690 10:28:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.690 10:28:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:12.690 [2024-11-20 10:28:15.897163] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:05:12.690 10:28:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.690 10:28:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:12.690 10:28:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.690 10:28:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:12.690 10:28:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.690 10:28:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:12.690 { 00:05:12.690 "subsystems": [ 00:05:12.690 { 00:05:12.690 "subsystem": "fsdev", 00:05:12.690 "config": [ 00:05:12.690 { 00:05:12.690 "method": "fsdev_set_opts", 00:05:12.690 "params": { 00:05:12.690 "fsdev_io_pool_size": 65535, 00:05:12.690 "fsdev_io_cache_size": 256 00:05:12.690 } 00:05:12.690 } 00:05:12.690 ] 00:05:12.690 }, 00:05:12.690 { 00:05:12.690 "subsystem": "keyring", 00:05:12.690 "config": [] 00:05:12.690 }, 00:05:12.690 { 00:05:12.690 "subsystem": "iobuf", 00:05:12.690 "config": [ 00:05:12.690 { 00:05:12.690 "method": "iobuf_set_options", 00:05:12.690 "params": { 00:05:12.690 "small_pool_count": 8192, 00:05:12.690 "large_pool_count": 1024, 00:05:12.690 "small_bufsize": 8192, 00:05:12.690 "large_bufsize": 135168, 00:05:12.690 "enable_numa": false 00:05:12.690 } 00:05:12.690 } 00:05:12.690 ] 00:05:12.690 }, 00:05:12.690 { 00:05:12.690 "subsystem": "sock", 00:05:12.690 "config": [ 00:05:12.690 { 00:05:12.690 "method": "sock_set_default_impl", 00:05:12.690 "params": { 00:05:12.690 "impl_name": "posix" 00:05:12.690 } 00:05:12.690 }, 00:05:12.690 { 00:05:12.690 "method": "sock_impl_set_options", 00:05:12.690 "params": { 00:05:12.690 "impl_name": "ssl", 00:05:12.690 "recv_buf_size": 4096, 00:05:12.690 "send_buf_size": 4096, 00:05:12.690 "enable_recv_pipe": true, 00:05:12.690 "enable_quickack": false, 00:05:12.690 
"enable_placement_id": 0, 00:05:12.690 "enable_zerocopy_send_server": true, 00:05:12.690 "enable_zerocopy_send_client": false, 00:05:12.690 "zerocopy_threshold": 0, 00:05:12.690 "tls_version": 0, 00:05:12.690 "enable_ktls": false 00:05:12.690 } 00:05:12.690 }, 00:05:12.690 { 00:05:12.690 "method": "sock_impl_set_options", 00:05:12.690 "params": { 00:05:12.690 "impl_name": "posix", 00:05:12.690 "recv_buf_size": 2097152, 00:05:12.690 "send_buf_size": 2097152, 00:05:12.690 "enable_recv_pipe": true, 00:05:12.690 "enable_quickack": false, 00:05:12.690 "enable_placement_id": 0, 00:05:12.690 "enable_zerocopy_send_server": true, 00:05:12.690 "enable_zerocopy_send_client": false, 00:05:12.690 "zerocopy_threshold": 0, 00:05:12.690 "tls_version": 0, 00:05:12.690 "enable_ktls": false 00:05:12.690 } 00:05:12.690 } 00:05:12.690 ] 00:05:12.690 }, 00:05:12.690 { 00:05:12.690 "subsystem": "vmd", 00:05:12.690 "config": [] 00:05:12.690 }, 00:05:12.690 { 00:05:12.690 "subsystem": "accel", 00:05:12.690 "config": [ 00:05:12.690 { 00:05:12.690 "method": "accel_set_options", 00:05:12.690 "params": { 00:05:12.690 "small_cache_size": 128, 00:05:12.690 "large_cache_size": 16, 00:05:12.690 "task_count": 2048, 00:05:12.690 "sequence_count": 2048, 00:05:12.690 "buf_count": 2048 00:05:12.690 } 00:05:12.690 } 00:05:12.690 ] 00:05:12.690 }, 00:05:12.690 { 00:05:12.690 "subsystem": "bdev", 00:05:12.690 "config": [ 00:05:12.690 { 00:05:12.690 "method": "bdev_set_options", 00:05:12.690 "params": { 00:05:12.690 "bdev_io_pool_size": 65535, 00:05:12.690 "bdev_io_cache_size": 256, 00:05:12.690 "bdev_auto_examine": true, 00:05:12.690 "iobuf_small_cache_size": 128, 00:05:12.690 "iobuf_large_cache_size": 16 00:05:12.690 } 00:05:12.690 }, 00:05:12.690 { 00:05:12.690 "method": "bdev_raid_set_options", 00:05:12.690 "params": { 00:05:12.690 "process_window_size_kb": 1024, 00:05:12.690 "process_max_bandwidth_mb_sec": 0 00:05:12.690 } 00:05:12.690 }, 00:05:12.690 { 00:05:12.690 "method": "bdev_iscsi_set_options", 
00:05:12.690 "params": { 00:05:12.690 "timeout_sec": 30 00:05:12.690 } 00:05:12.690 }, 00:05:12.690 { 00:05:12.690 "method": "bdev_nvme_set_options", 00:05:12.690 "params": { 00:05:12.690 "action_on_timeout": "none", 00:05:12.690 "timeout_us": 0, 00:05:12.690 "timeout_admin_us": 0, 00:05:12.690 "keep_alive_timeout_ms": 10000, 00:05:12.690 "arbitration_burst": 0, 00:05:12.690 "low_priority_weight": 0, 00:05:12.690 "medium_priority_weight": 0, 00:05:12.690 "high_priority_weight": 0, 00:05:12.690 "nvme_adminq_poll_period_us": 10000, 00:05:12.690 "nvme_ioq_poll_period_us": 0, 00:05:12.690 "io_queue_requests": 0, 00:05:12.690 "delay_cmd_submit": true, 00:05:12.690 "transport_retry_count": 4, 00:05:12.690 "bdev_retry_count": 3, 00:05:12.690 "transport_ack_timeout": 0, 00:05:12.690 "ctrlr_loss_timeout_sec": 0, 00:05:12.691 "reconnect_delay_sec": 0, 00:05:12.691 "fast_io_fail_timeout_sec": 0, 00:05:12.691 "disable_auto_failback": false, 00:05:12.691 "generate_uuids": false, 00:05:12.691 "transport_tos": 0, 00:05:12.691 "nvme_error_stat": false, 00:05:12.691 "rdma_srq_size": 0, 00:05:12.691 "io_path_stat": false, 00:05:12.691 "allow_accel_sequence": false, 00:05:12.691 "rdma_max_cq_size": 0, 00:05:12.691 "rdma_cm_event_timeout_ms": 0, 00:05:12.691 "dhchap_digests": [ 00:05:12.691 "sha256", 00:05:12.691 "sha384", 00:05:12.691 "sha512" 00:05:12.691 ], 00:05:12.691 "dhchap_dhgroups": [ 00:05:12.691 "null", 00:05:12.691 "ffdhe2048", 00:05:12.691 "ffdhe3072", 00:05:12.691 "ffdhe4096", 00:05:12.691 "ffdhe6144", 00:05:12.691 "ffdhe8192" 00:05:12.691 ] 00:05:12.691 } 00:05:12.691 }, 00:05:12.691 { 00:05:12.691 "method": "bdev_nvme_set_hotplug", 00:05:12.691 "params": { 00:05:12.691 "period_us": 100000, 00:05:12.691 "enable": false 00:05:12.691 } 00:05:12.691 }, 00:05:12.691 { 00:05:12.691 "method": "bdev_wait_for_examine" 00:05:12.691 } 00:05:12.691 ] 00:05:12.691 }, 00:05:12.691 { 00:05:12.691 "subsystem": "scsi", 00:05:12.691 "config": null 00:05:12.691 }, 00:05:12.691 { 
00:05:12.691 "subsystem": "scheduler", 00:05:12.691 "config": [ 00:05:12.691 { 00:05:12.691 "method": "framework_set_scheduler", 00:05:12.691 "params": { 00:05:12.691 "name": "static" 00:05:12.691 } 00:05:12.691 } 00:05:12.691 ] 00:05:12.691 }, 00:05:12.691 { 00:05:12.691 "subsystem": "vhost_scsi", 00:05:12.691 "config": [] 00:05:12.691 }, 00:05:12.691 { 00:05:12.691 "subsystem": "vhost_blk", 00:05:12.691 "config": [] 00:05:12.691 }, 00:05:12.691 { 00:05:12.691 "subsystem": "ublk", 00:05:12.691 "config": [] 00:05:12.691 }, 00:05:12.691 { 00:05:12.691 "subsystem": "nbd", 00:05:12.691 "config": [] 00:05:12.691 }, 00:05:12.691 { 00:05:12.691 "subsystem": "nvmf", 00:05:12.691 "config": [ 00:05:12.691 { 00:05:12.691 "method": "nvmf_set_config", 00:05:12.691 "params": { 00:05:12.691 "discovery_filter": "match_any", 00:05:12.691 "admin_cmd_passthru": { 00:05:12.691 "identify_ctrlr": false 00:05:12.691 }, 00:05:12.691 "dhchap_digests": [ 00:05:12.691 "sha256", 00:05:12.691 "sha384", 00:05:12.691 "sha512" 00:05:12.691 ], 00:05:12.691 "dhchap_dhgroups": [ 00:05:12.691 "null", 00:05:12.691 "ffdhe2048", 00:05:12.691 "ffdhe3072", 00:05:12.691 "ffdhe4096", 00:05:12.691 "ffdhe6144", 00:05:12.691 "ffdhe8192" 00:05:12.691 ] 00:05:12.691 } 00:05:12.691 }, 00:05:12.691 { 00:05:12.691 "method": "nvmf_set_max_subsystems", 00:05:12.691 "params": { 00:05:12.691 "max_subsystems": 1024 00:05:12.691 } 00:05:12.691 }, 00:05:12.691 { 00:05:12.691 "method": "nvmf_set_crdt", 00:05:12.691 "params": { 00:05:12.691 "crdt1": 0, 00:05:12.691 "crdt2": 0, 00:05:12.691 "crdt3": 0 00:05:12.691 } 00:05:12.691 }, 00:05:12.691 { 00:05:12.691 "method": "nvmf_create_transport", 00:05:12.691 "params": { 00:05:12.691 "trtype": "TCP", 00:05:12.691 "max_queue_depth": 128, 00:05:12.691 "max_io_qpairs_per_ctrlr": 127, 00:05:12.691 "in_capsule_data_size": 4096, 00:05:12.691 "max_io_size": 131072, 00:05:12.691 "io_unit_size": 131072, 00:05:12.691 "max_aq_depth": 128, 00:05:12.691 "num_shared_buffers": 511, 
00:05:12.691 "buf_cache_size": 4294967295, 00:05:12.691 "dif_insert_or_strip": false, 00:05:12.691 "zcopy": false, 00:05:12.691 "c2h_success": true, 00:05:12.691 "sock_priority": 0, 00:05:12.691 "abort_timeout_sec": 1, 00:05:12.691 "ack_timeout": 0, 00:05:12.691 "data_wr_pool_size": 0 00:05:12.691 } 00:05:12.691 } 00:05:12.691 ] 00:05:12.691 }, 00:05:12.691 { 00:05:12.691 "subsystem": "iscsi", 00:05:12.691 "config": [ 00:05:12.691 { 00:05:12.691 "method": "iscsi_set_options", 00:05:12.691 "params": { 00:05:12.691 "node_base": "iqn.2016-06.io.spdk", 00:05:12.691 "max_sessions": 128, 00:05:12.691 "max_connections_per_session": 2, 00:05:12.691 "max_queue_depth": 64, 00:05:12.691 "default_time2wait": 2, 00:05:12.691 "default_time2retain": 20, 00:05:12.691 "first_burst_length": 8192, 00:05:12.691 "immediate_data": true, 00:05:12.691 "allow_duplicated_isid": false, 00:05:12.691 "error_recovery_level": 0, 00:05:12.691 "nop_timeout": 60, 00:05:12.691 "nop_in_interval": 30, 00:05:12.691 "disable_chap": false, 00:05:12.691 "require_chap": false, 00:05:12.691 "mutual_chap": false, 00:05:12.691 "chap_group": 0, 00:05:12.691 "max_large_datain_per_connection": 64, 00:05:12.691 "max_r2t_per_connection": 4, 00:05:12.691 "pdu_pool_size": 36864, 00:05:12.691 "immediate_data_pool_size": 16384, 00:05:12.691 "data_out_pool_size": 2048 00:05:12.691 } 00:05:12.691 } 00:05:12.691 ] 00:05:12.691 } 00:05:12.691 ] 00:05:12.691 } 00:05:12.691 10:28:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:12.691 10:28:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57322 00:05:12.691 10:28:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57322 ']' 00:05:12.691 10:28:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57322 00:05:12.691 10:28:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:12.691 10:28:16 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.691 10:28:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57322 00:05:12.691 10:28:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.691 10:28:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.691 killing process with pid 57322 00:05:12.691 10:28:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57322' 00:05:12.691 10:28:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57322 00:05:12.691 10:28:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57322 00:05:15.283 10:28:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57378 00:05:15.283 10:28:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:15.283 10:28:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:20.601 10:28:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57378 00:05:20.601 10:28:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57378 ']' 00:05:20.601 10:28:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57378 00:05:20.601 10:28:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:20.601 10:28:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:20.601 10:28:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57378 00:05:20.601 10:28:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:20.601 10:28:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:05:20.601 killing process with pid 57378 00:05:20.601 10:28:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57378' 00:05:20.601 10:28:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57378 00:05:20.601 10:28:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57378 00:05:23.218 10:28:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:23.218 10:28:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:23.218 00:05:23.218 real 0m11.745s 00:05:23.218 user 0m11.227s 00:05:23.218 sys 0m0.835s 00:05:23.218 10:28:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.218 10:28:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:23.218 ************************************ 00:05:23.218 END TEST skip_rpc_with_json 00:05:23.218 ************************************ 00:05:23.218 10:28:26 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:23.218 10:28:26 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.218 10:28:26 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.218 10:28:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.218 ************************************ 00:05:23.218 START TEST skip_rpc_with_delay 00:05:23.218 ************************************ 00:05:23.218 10:28:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:23.218 10:28:26 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:23.218 10:28:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:23.218 10:28:26 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:23.218 10:28:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:23.218 10:28:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:23.218 10:28:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:23.218 10:28:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:23.218 10:28:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:23.218 10:28:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:23.218 10:28:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:23.218 10:28:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:23.218 10:28:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:23.218 [2024-11-20 10:28:26.503333] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:23.218 10:28:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:23.218 10:28:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:23.218 10:28:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:23.218 10:28:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:23.218 00:05:23.218 real 0m0.171s 00:05:23.218 user 0m0.096s 00:05:23.218 sys 0m0.074s 00:05:23.218 10:28:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.218 10:28:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:23.218 ************************************ 00:05:23.218 END TEST skip_rpc_with_delay 00:05:23.218 ************************************ 00:05:23.218 10:28:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:23.218 10:28:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:23.218 10:28:26 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:23.218 10:28:26 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.218 10:28:26 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.218 10:28:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.218 ************************************ 00:05:23.218 START TEST exit_on_failed_rpc_init 00:05:23.218 ************************************ 00:05:23.218 10:28:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:23.218 10:28:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57517 00:05:23.218 10:28:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:23.219 10:28:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57517 00:05:23.219 10:28:26 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57517 ']' 00:05:23.219 10:28:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.219 10:28:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.219 10:28:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.219 10:28:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.219 10:28:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:23.478 [2024-11-20 10:28:26.747663] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:05:23.478 [2024-11-20 10:28:26.747787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57517 ] 00:05:23.478 [2024-11-20 10:28:26.924491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.738 [2024-11-20 10:28:27.043854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.673 10:28:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.673 10:28:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:24.673 10:28:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.673 10:28:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:24.673 10:28:27 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:24.673 10:28:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:24.673 10:28:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:24.673 10:28:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:24.674 10:28:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:24.674 10:28:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:24.674 10:28:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:24.674 10:28:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:24.674 10:28:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:24.674 10:28:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:24.674 10:28:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:24.674 [2024-11-20 10:28:28.044916] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:05:24.674 [2024-11-20 10:28:28.045031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57535 ] 00:05:24.932 [2024-11-20 10:28:28.220718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.932 [2024-11-20 10:28:28.344170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.932 [2024-11-20 10:28:28.344276] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:24.932 [2024-11-20 10:28:28.344290] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:24.932 [2024-11-20 10:28:28.344303] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:25.191 10:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:25.191 10:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:25.191 10:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:25.191 10:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:25.191 10:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:25.191 10:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:25.191 10:28:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:25.191 10:28:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57517 00:05:25.191 10:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57517 ']' 00:05:25.191 10:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57517 00:05:25.191 10:28:28 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:25.191 10:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.191 10:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57517 00:05:25.191 10:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.191 10:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.191 killing process with pid 57517 00:05:25.191 10:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57517' 00:05:25.191 10:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57517 00:05:25.191 10:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57517 00:05:27.805 00:05:27.805 real 0m4.494s 00:05:27.805 user 0m4.888s 00:05:27.805 sys 0m0.564s 00:05:27.805 10:28:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.805 10:28:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:27.805 ************************************ 00:05:27.805 END TEST exit_on_failed_rpc_init 00:05:27.805 ************************************ 00:05:27.805 10:28:31 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:27.805 00:05:27.805 real 0m24.549s 00:05:27.805 user 0m23.575s 00:05:27.805 sys 0m2.181s 00:05:27.805 ************************************ 00:05:27.805 END TEST skip_rpc 00:05:27.805 10:28:31 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.805 10:28:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.805 ************************************ 00:05:27.805 10:28:31 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:27.805 10:28:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.805 10:28:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.805 10:28:31 -- common/autotest_common.sh@10 -- # set +x 00:05:27.805 ************************************ 00:05:27.805 START TEST rpc_client 00:05:27.805 ************************************ 00:05:27.805 10:28:31 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:28.065 * Looking for test storage... 00:05:28.065 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:28.065 10:28:31 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:28.065 10:28:31 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:28.065 10:28:31 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:28.065 10:28:31 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:28.065 10:28:31 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.065 10:28:31 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.065 10:28:31 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.065 10:28:31 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.065 10:28:31 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.065 10:28:31 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.065 10:28:31 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.065 10:28:31 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.065 10:28:31 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.065 10:28:31 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.065 10:28:31 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.065 10:28:31 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:28.065 10:28:31 rpc_client -- scripts/common.sh@345 
-- # : 1 00:05:28.065 10:28:31 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.065 10:28:31 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:28.065 10:28:31 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:28.065 10:28:31 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:28.065 10:28:31 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.065 10:28:31 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:28.065 10:28:31 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.065 10:28:31 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:28.065 10:28:31 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:28.065 10:28:31 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.065 10:28:31 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:28.065 10:28:31 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.065 10:28:31 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.065 10:28:31 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.065 10:28:31 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:28.065 10:28:31 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.065 10:28:31 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:28.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.065 --rc genhtml_branch_coverage=1 00:05:28.065 --rc genhtml_function_coverage=1 00:05:28.065 --rc genhtml_legend=1 00:05:28.065 --rc geninfo_all_blocks=1 00:05:28.065 --rc geninfo_unexecuted_blocks=1 00:05:28.065 00:05:28.065 ' 00:05:28.065 10:28:31 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:28.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.065 --rc genhtml_branch_coverage=1 00:05:28.065 --rc genhtml_function_coverage=1 00:05:28.065 --rc 
genhtml_legend=1 00:05:28.065 --rc geninfo_all_blocks=1 00:05:28.065 --rc geninfo_unexecuted_blocks=1 00:05:28.065 00:05:28.065 ' 00:05:28.065 10:28:31 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:28.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.065 --rc genhtml_branch_coverage=1 00:05:28.065 --rc genhtml_function_coverage=1 00:05:28.065 --rc genhtml_legend=1 00:05:28.065 --rc geninfo_all_blocks=1 00:05:28.065 --rc geninfo_unexecuted_blocks=1 00:05:28.065 00:05:28.065 ' 00:05:28.065 10:28:31 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:28.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.065 --rc genhtml_branch_coverage=1 00:05:28.065 --rc genhtml_function_coverage=1 00:05:28.065 --rc genhtml_legend=1 00:05:28.065 --rc geninfo_all_blocks=1 00:05:28.065 --rc geninfo_unexecuted_blocks=1 00:05:28.065 00:05:28.065 ' 00:05:28.065 10:28:31 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:28.065 OK 00:05:28.325 10:28:31 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:28.325 00:05:28.325 real 0m0.281s 00:05:28.325 user 0m0.159s 00:05:28.325 sys 0m0.138s 00:05:28.325 ************************************ 00:05:28.325 END TEST rpc_client 00:05:28.325 ************************************ 00:05:28.325 10:28:31 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.325 10:28:31 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:28.325 10:28:31 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:28.325 10:28:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.325 10:28:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.325 10:28:31 -- common/autotest_common.sh@10 -- # set +x 00:05:28.325 ************************************ 00:05:28.325 START TEST json_config 
00:05:28.325 ************************************ 00:05:28.325 10:28:31 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:28.325 10:28:31 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:28.325 10:28:31 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:28.325 10:28:31 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:28.325 10:28:31 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:28.325 10:28:31 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.325 10:28:31 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.325 10:28:31 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.325 10:28:31 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.325 10:28:31 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.325 10:28:31 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.325 10:28:31 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.325 10:28:31 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.325 10:28:31 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.325 10:28:31 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.325 10:28:31 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.325 10:28:31 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:28.325 10:28:31 json_config -- scripts/common.sh@345 -- # : 1 00:05:28.325 10:28:31 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.325 10:28:31 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:28.325 10:28:31 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:28.325 10:28:31 json_config -- scripts/common.sh@353 -- # local d=1 00:05:28.325 10:28:31 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.325 10:28:31 json_config -- scripts/common.sh@355 -- # echo 1 00:05:28.325 10:28:31 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.325 10:28:31 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:28.325 10:28:31 json_config -- scripts/common.sh@353 -- # local d=2 00:05:28.325 10:28:31 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.325 10:28:31 json_config -- scripts/common.sh@355 -- # echo 2 00:05:28.325 10:28:31 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.325 10:28:31 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.325 10:28:31 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.325 10:28:31 json_config -- scripts/common.sh@368 -- # return 0 00:05:28.325 10:28:31 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.325 10:28:31 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:28.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.325 --rc genhtml_branch_coverage=1 00:05:28.325 --rc genhtml_function_coverage=1 00:05:28.325 --rc genhtml_legend=1 00:05:28.325 --rc geninfo_all_blocks=1 00:05:28.325 --rc geninfo_unexecuted_blocks=1 00:05:28.325 00:05:28.325 ' 00:05:28.325 10:28:31 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:28.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.325 --rc genhtml_branch_coverage=1 00:05:28.325 --rc genhtml_function_coverage=1 00:05:28.325 --rc genhtml_legend=1 00:05:28.325 --rc geninfo_all_blocks=1 00:05:28.325 --rc geninfo_unexecuted_blocks=1 00:05:28.325 00:05:28.325 ' 00:05:28.325 10:28:31 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:28.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.325 --rc genhtml_branch_coverage=1 00:05:28.325 --rc genhtml_function_coverage=1 00:05:28.325 --rc genhtml_legend=1 00:05:28.325 --rc geninfo_all_blocks=1 00:05:28.325 --rc geninfo_unexecuted_blocks=1 00:05:28.325 00:05:28.325 ' 00:05:28.325 10:28:31 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:28.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.325 --rc genhtml_branch_coverage=1 00:05:28.325 --rc genhtml_function_coverage=1 00:05:28.325 --rc genhtml_legend=1 00:05:28.325 --rc geninfo_all_blocks=1 00:05:28.325 --rc geninfo_unexecuted_blocks=1 00:05:28.325 00:05:28.325 ' 00:05:28.325 10:28:31 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:28.325 10:28:31 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:28.325 10:28:31 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:28.325 10:28:31 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:28.325 10:28:31 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:28.325 10:28:31 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:28.325 10:28:31 json_config -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:28.325 10:28:31 json_config -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:05:28.325 10:28:31 json_config -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:28.325 10:28:31 json_config -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:05:28.326 10:28:31 json_config -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:be5e0f63-c4b2-4a21-a7e7-d50ddb8f0bf8 00:05:28.326 10:28:31 json_config -- nvmf/common.sh@16 -- # NVME_HOSTID=be5e0f63-c4b2-4a21-a7e7-d50ddb8f0bf8 00:05:28.326 10:28:31 json_config -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:28.326 
10:28:31 json_config -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:05:28.326 10:28:31 json_config -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:05:28.326 10:28:31 json_config -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:28.326 10:28:31 json_config -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:28.326 10:28:31 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:28.586 10:28:31 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:28.586 10:28:31 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:28.586 10:28:31 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:28.586 10:28:31 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.586 10:28:31 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.586 10:28:31 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.586 10:28:31 json_config -- paths/export.sh@5 -- # export PATH 00:05:28.586 10:28:31 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.586 10:28:31 json_config -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:05:28.586 10:28:31 json_config -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:05:28.586 10:28:31 json_config -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:05:28.586 10:28:31 json_config -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:05:28.586 10:28:31 json_config -- nvmf/common.sh@50 -- # : 0 00:05:28.586 10:28:31 json_config -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:05:28.586 10:28:31 json_config -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:05:28.586 10:28:31 json_config -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:05:28.586 10:28:31 json_config -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:28.586 10:28:31 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:28.586 10:28:31 json_config -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:05:28.586 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 
00:05:28.586 10:28:31 json_config -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:05:28.586 10:28:31 json_config -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:05:28.586 10:28:31 json_config -- nvmf/common.sh@54 -- # have_pci_nics=0 00:05:28.586 10:28:31 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:28.586 10:28:31 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:28.586 10:28:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:28.586 10:28:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:28.586 10:28:31 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:28.586 10:28:31 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:28.586 WARNING: No tests are enabled so not running JSON configuration tests 00:05:28.586 10:28:31 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:28.586 00:05:28.586 real 0m0.214s 00:05:28.586 user 0m0.141s 00:05:28.586 sys 0m0.077s 00:05:28.586 10:28:31 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.586 10:28:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.586 ************************************ 00:05:28.586 END TEST json_config 00:05:28.587 ************************************ 00:05:28.587 10:28:31 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:28.587 10:28:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.587 10:28:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.587 10:28:31 -- common/autotest_common.sh@10 -- # set +x 00:05:28.587 ************************************ 00:05:28.587 START TEST json_config_extra_key 00:05:28.587 
************************************ 00:05:28.587 10:28:31 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:28.587 10:28:31 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:28.587 10:28:31 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:28.587 10:28:31 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:28.587 10:28:32 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:28.587 10:28:32 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.587 10:28:32 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.587 10:28:32 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.587 10:28:32 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.587 10:28:32 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.587 10:28:32 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.587 10:28:32 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.587 10:28:32 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.587 10:28:32 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.587 10:28:32 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.587 10:28:32 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.587 10:28:32 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:28.587 10:28:32 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:28.587 10:28:32 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.587 10:28:32 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:28.587 10:28:32 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:28.587 10:28:32 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:28.587 10:28:32 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.587 10:28:32 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:28.587 10:28:32 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.587 10:28:32 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:28.587 10:28:32 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:28.587 10:28:32 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.587 10:28:32 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:28.587 10:28:32 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.587 10:28:32 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.587 10:28:32 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.587 10:28:32 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:28.587 10:28:32 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.587 10:28:32 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:28.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.587 --rc genhtml_branch_coverage=1 00:05:28.587 --rc genhtml_function_coverage=1 00:05:28.587 --rc genhtml_legend=1 00:05:28.587 --rc geninfo_all_blocks=1 00:05:28.587 --rc geninfo_unexecuted_blocks=1 00:05:28.587 00:05:28.587 ' 00:05:28.587 10:28:32 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:28.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.587 --rc genhtml_branch_coverage=1 00:05:28.587 --rc genhtml_function_coverage=1 00:05:28.587 --rc 
genhtml_legend=1 00:05:28.587 --rc geninfo_all_blocks=1 00:05:28.587 --rc geninfo_unexecuted_blocks=1 00:05:28.587 00:05:28.587 ' 00:05:28.587 10:28:32 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:28.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.587 --rc genhtml_branch_coverage=1 00:05:28.587 --rc genhtml_function_coverage=1 00:05:28.587 --rc genhtml_legend=1 00:05:28.587 --rc geninfo_all_blocks=1 00:05:28.587 --rc geninfo_unexecuted_blocks=1 00:05:28.587 00:05:28.587 ' 00:05:28.587 10:28:32 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:28.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.587 --rc genhtml_branch_coverage=1 00:05:28.587 --rc genhtml_function_coverage=1 00:05:28.587 --rc genhtml_legend=1 00:05:28.587 --rc geninfo_all_blocks=1 00:05:28.587 --rc geninfo_unexecuted_blocks=1 00:05:28.587 00:05:28.587 ' 00:05:28.587 10:28:32 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:28.587 10:28:32 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:28.587 10:28:32 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:28.587 10:28:32 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:28.587 10:28:32 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:28.587 10:28:32 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:28.587 10:28:32 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:28.587 10:28:32 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:05:28.587 10:28:32 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:28.587 10:28:32 json_config_extra_key -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:05:28.587 10:28:32 json_config_extra_key -- nvmf/common.sh@15 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:be5e0f63-c4b2-4a21-a7e7-d50ddb8f0bf8 00:05:28.587 10:28:32 json_config_extra_key -- nvmf/common.sh@16 -- # NVME_HOSTID=be5e0f63-c4b2-4a21-a7e7-d50ddb8f0bf8 00:05:28.587 10:28:32 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:28.587 10:28:32 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:05:28.587 10:28:32 json_config_extra_key -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:05:28.847 10:28:32 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:28.847 10:28:32 json_config_extra_key -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:28.847 10:28:32 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:28.847 10:28:32 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:28.847 10:28:32 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:28.847 10:28:32 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:28.847 10:28:32 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.847 10:28:32 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.848 10:28:32 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.848 10:28:32 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:28.848 10:28:32 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.848 10:28:32 json_config_extra_key -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:05:28.848 10:28:32 json_config_extra_key -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:05:28.848 10:28:32 json_config_extra_key -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:05:28.848 10:28:32 json_config_extra_key -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:05:28.848 10:28:32 json_config_extra_key -- nvmf/common.sh@50 -- # : 0 00:05:28.848 10:28:32 json_config_extra_key -- nvmf/common.sh@51 -- # export 
NVMF_APP_SHM_ID 00:05:28.848 10:28:32 json_config_extra_key -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:05:28.848 10:28:32 json_config_extra_key -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:05:28.848 10:28:32 json_config_extra_key -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:28.848 10:28:32 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:28.848 10:28:32 json_config_extra_key -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:05:28.848 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:05:28.848 10:28:32 json_config_extra_key -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:05:28.848 10:28:32 json_config_extra_key -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:05:28.848 10:28:32 json_config_extra_key -- nvmf/common.sh@54 -- # have_pci_nics=0 00:05:28.848 10:28:32 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:28.848 10:28:32 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:28.848 10:28:32 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:28.848 10:28:32 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:28.848 10:28:32 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:28.848 10:28:32 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:28.848 10:28:32 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:28.848 10:28:32 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:28.848 10:28:32 json_config_extra_key -- 
json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:28.848 10:28:32 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:28.848 10:28:32 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:28.848 INFO: launching applications... 00:05:28.848 10:28:32 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:28.848 10:28:32 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:28.848 10:28:32 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:28.848 10:28:32 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:28.848 10:28:32 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:28.848 10:28:32 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:28.848 10:28:32 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:28.848 10:28:32 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:28.848 10:28:32 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57745 00:05:28.848 10:28:32 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:28.848 Waiting for target to run... 
00:05:28.848 10:28:32 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57745 /var/tmp/spdk_tgt.sock 00:05:28.848 10:28:32 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:28.848 10:28:32 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57745 ']' 00:05:28.848 10:28:32 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:28.848 10:28:32 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.848 10:28:32 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:28.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:28.848 10:28:32 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.848 10:28:32 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:28.848 [2024-11-20 10:28:32.218646] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:05:28.848 [2024-11-20 10:28:32.218862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57745 ] 00:05:29.418 [2024-11-20 10:28:32.609720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.418 [2024-11-20 10:28:32.726875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.402 00:05:30.402 INFO: shutting down applications... 
00:05:30.402 10:28:33 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.402 10:28:33 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:30.402 10:28:33 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:30.402 10:28:33 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:30.402 10:28:33 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:30.402 10:28:33 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:30.402 10:28:33 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:30.402 10:28:33 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57745 ]] 00:05:30.402 10:28:33 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57745 00:05:30.402 10:28:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:30.402 10:28:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:30.402 10:28:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57745 00:05:30.402 10:28:33 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:30.660 10:28:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:30.660 10:28:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:30.660 10:28:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57745 00:05:30.660 10:28:33 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:31.228 10:28:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:31.228 10:28:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:31.228 10:28:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57745 00:05:31.228 10:28:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:31.794 10:28:35 
json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:31.794 10:28:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:31.794 10:28:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57745 00:05:31.794 10:28:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:32.053 10:28:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:32.053 10:28:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:32.053 10:28:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57745 00:05:32.053 10:28:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:32.622 10:28:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:32.622 10:28:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:32.622 10:28:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57745 00:05:32.622 10:28:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:33.190 10:28:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:33.190 10:28:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:33.190 10:28:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57745 00:05:33.190 SPDK target shutdown done 00:05:33.190 Success 00:05:33.190 10:28:36 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:33.190 10:28:36 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:33.190 10:28:36 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:33.190 10:28:36 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:33.190 10:28:36 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:33.190 00:05:33.190 real 0m4.657s 00:05:33.190 user 0m4.349s 00:05:33.190 sys 0m0.592s 00:05:33.190 10:28:36 json_config_extra_key -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.190 10:28:36 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:33.190 ************************************ 00:05:33.190 END TEST json_config_extra_key 00:05:33.190 ************************************ 00:05:33.190 10:28:36 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:33.190 10:28:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.190 10:28:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.190 10:28:36 -- common/autotest_common.sh@10 -- # set +x 00:05:33.190 ************************************ 00:05:33.190 START TEST alias_rpc 00:05:33.190 ************************************ 00:05:33.190 10:28:36 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:33.450 * Looking for test storage... 00:05:33.450 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:33.450 10:28:36 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:33.450 10:28:36 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:33.450 10:28:36 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:33.450 10:28:36 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:33.450 10:28:36 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.450 10:28:36 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.450 10:28:36 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.450 10:28:36 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.450 10:28:36 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.450 10:28:36 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.450 10:28:36 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.450 10:28:36 alias_rpc -- scripts/common.sh@338 -- # local 
'op=<' 00:05:33.450 10:28:36 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.450 10:28:36 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.450 10:28:36 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.450 10:28:36 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:33.450 10:28:36 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:33.450 10:28:36 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.450 10:28:36 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:33.450 10:28:36 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:33.450 10:28:36 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:33.450 10:28:36 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.450 10:28:36 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:33.450 10:28:36 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.450 10:28:36 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:33.450 10:28:36 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:33.450 10:28:36 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.450 10:28:36 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:33.450 10:28:36 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.450 10:28:36 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.450 10:28:36 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.450 10:28:36 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:33.450 10:28:36 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.450 10:28:36 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:33.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.450 --rc genhtml_branch_coverage=1 00:05:33.450 --rc genhtml_function_coverage=1 00:05:33.450 --rc genhtml_legend=1 00:05:33.450 --rc 
geninfo_all_blocks=1 00:05:33.450 --rc geninfo_unexecuted_blocks=1 00:05:33.450 00:05:33.450 ' 00:05:33.450 10:28:36 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:33.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.450 --rc genhtml_branch_coverage=1 00:05:33.450 --rc genhtml_function_coverage=1 00:05:33.450 --rc genhtml_legend=1 00:05:33.450 --rc geninfo_all_blocks=1 00:05:33.450 --rc geninfo_unexecuted_blocks=1 00:05:33.450 00:05:33.450 ' 00:05:33.450 10:28:36 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:33.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.450 --rc genhtml_branch_coverage=1 00:05:33.450 --rc genhtml_function_coverage=1 00:05:33.450 --rc genhtml_legend=1 00:05:33.450 --rc geninfo_all_blocks=1 00:05:33.450 --rc geninfo_unexecuted_blocks=1 00:05:33.450 00:05:33.450 ' 00:05:33.450 10:28:36 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:33.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.450 --rc genhtml_branch_coverage=1 00:05:33.450 --rc genhtml_function_coverage=1 00:05:33.450 --rc genhtml_legend=1 00:05:33.450 --rc geninfo_all_blocks=1 00:05:33.450 --rc geninfo_unexecuted_blocks=1 00:05:33.450 00:05:33.450 ' 00:05:33.450 10:28:36 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:33.450 10:28:36 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57861 00:05:33.450 10:28:36 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:33.450 10:28:36 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57861 00:05:33.450 10:28:36 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57861 ']' 00:05:33.450 10:28:36 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.450 10:28:36 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.450 Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.450 10:28:36 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.450 10:28:36 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.450 10:28:36 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.710 [2024-11-20 10:28:36.963415] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:05:33.710 [2024-11-20 10:28:36.963751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57861 ] 00:05:33.710 [2024-11-20 10:28:37.139651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.970 [2024-11-20 10:28:37.259761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.913 10:28:38 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.913 10:28:38 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:34.913 10:28:38 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:35.173 10:28:38 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57861 00:05:35.173 10:28:38 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57861 ']' 00:05:35.173 10:28:38 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57861 00:05:35.173 10:28:38 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:35.173 10:28:38 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.173 10:28:38 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57861 00:05:35.173 killing process with pid 57861 00:05:35.173 10:28:38 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:35.173 10:28:38 
alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:35.173 10:28:38 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57861' 00:05:35.173 10:28:38 alias_rpc -- common/autotest_common.sh@973 -- # kill 57861 00:05:35.173 10:28:38 alias_rpc -- common/autotest_common.sh@978 -- # wait 57861 00:05:37.788 ************************************ 00:05:37.788 END TEST alias_rpc 00:05:37.788 ************************************ 00:05:37.788 00:05:37.788 real 0m4.462s 00:05:37.788 user 0m4.473s 00:05:37.788 sys 0m0.577s 00:05:37.788 10:28:41 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.788 10:28:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.788 10:28:41 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:37.788 10:28:41 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:37.788 10:28:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.788 10:28:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.788 10:28:41 -- common/autotest_common.sh@10 -- # set +x 00:05:37.788 ************************************ 00:05:37.788 START TEST spdkcli_tcp 00:05:37.788 ************************************ 00:05:37.788 10:28:41 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:37.788 * Looking for test storage... 
00:05:37.788 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:37.788 10:28:41 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:37.788 10:28:41 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:37.788 10:28:41 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:38.048 10:28:41 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:38.048 10:28:41 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.048 10:28:41 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.048 10:28:41 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.048 10:28:41 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.048 10:28:41 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.048 10:28:41 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.048 10:28:41 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.048 10:28:41 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.048 10:28:41 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.048 10:28:41 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.048 10:28:41 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.048 10:28:41 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:38.048 10:28:41 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:38.048 10:28:41 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.048 10:28:41 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:38.048 10:28:41 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:38.048 10:28:41 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:38.048 10:28:41 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.048 10:28:41 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:38.048 10:28:41 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.048 10:28:41 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:38.048 10:28:41 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:38.048 10:28:41 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.048 10:28:41 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:38.048 10:28:41 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.048 10:28:41 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.048 10:28:41 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.048 10:28:41 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:38.048 10:28:41 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.048 10:28:41 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:38.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.048 --rc genhtml_branch_coverage=1 00:05:38.048 --rc genhtml_function_coverage=1 00:05:38.048 --rc genhtml_legend=1 00:05:38.048 --rc geninfo_all_blocks=1 00:05:38.048 --rc geninfo_unexecuted_blocks=1 00:05:38.048 00:05:38.048 ' 00:05:38.048 10:28:41 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:38.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.048 --rc genhtml_branch_coverage=1 00:05:38.048 --rc genhtml_function_coverage=1 00:05:38.048 --rc genhtml_legend=1 00:05:38.048 --rc geninfo_all_blocks=1 00:05:38.048 --rc geninfo_unexecuted_blocks=1 00:05:38.048 00:05:38.048 ' 00:05:38.048 10:28:41 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:38.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.048 --rc genhtml_branch_coverage=1 00:05:38.048 --rc genhtml_function_coverage=1 00:05:38.048 --rc genhtml_legend=1 00:05:38.048 --rc geninfo_all_blocks=1 00:05:38.048 --rc geninfo_unexecuted_blocks=1 00:05:38.048 00:05:38.048 ' 00:05:38.048 10:28:41 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:38.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.048 --rc genhtml_branch_coverage=1 00:05:38.048 --rc genhtml_function_coverage=1 00:05:38.048 --rc genhtml_legend=1 00:05:38.048 --rc geninfo_all_blocks=1 00:05:38.048 --rc geninfo_unexecuted_blocks=1 00:05:38.048 00:05:38.048 ' 00:05:38.048 10:28:41 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:38.048 10:28:41 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:38.048 10:28:41 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:38.048 10:28:41 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:38.048 10:28:41 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:38.048 10:28:41 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:38.048 10:28:41 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:38.048 10:28:41 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:38.048 10:28:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:38.048 10:28:41 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57969 00:05:38.048 10:28:41 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57969 00:05:38.048 10:28:41 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57969 ']' 00:05:38.048 10:28:41 spdkcli_tcp -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:38.048 10:28:41 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.048 10:28:41 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.048 10:28:41 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.048 10:28:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:38.048 10:28:41 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:38.048 [2024-11-20 10:28:41.442530] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:05:38.048 [2024-11-20 10:28:41.442664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57969 ] 00:05:38.308 [2024-11-20 10:28:41.618519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:38.308 [2024-11-20 10:28:41.739815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.308 [2024-11-20 10:28:41.739853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.246 10:28:42 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.246 10:28:42 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:39.246 10:28:42 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57992 00:05:39.246 10:28:42 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:39.246 10:28:42 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:39.507 [ 00:05:39.507 "bdev_malloc_delete", 00:05:39.507 
"bdev_malloc_create", 00:05:39.507 "bdev_null_resize", 00:05:39.507 "bdev_null_delete", 00:05:39.507 "bdev_null_create", 00:05:39.507 "bdev_nvme_cuse_unregister", 00:05:39.507 "bdev_nvme_cuse_register", 00:05:39.507 "bdev_opal_new_user", 00:05:39.507 "bdev_opal_set_lock_state", 00:05:39.507 "bdev_opal_delete", 00:05:39.507 "bdev_opal_get_info", 00:05:39.507 "bdev_opal_create", 00:05:39.507 "bdev_nvme_opal_revert", 00:05:39.507 "bdev_nvme_opal_init", 00:05:39.507 "bdev_nvme_send_cmd", 00:05:39.507 "bdev_nvme_set_keys", 00:05:39.507 "bdev_nvme_get_path_iostat", 00:05:39.507 "bdev_nvme_get_mdns_discovery_info", 00:05:39.507 "bdev_nvme_stop_mdns_discovery", 00:05:39.507 "bdev_nvme_start_mdns_discovery", 00:05:39.507 "bdev_nvme_set_multipath_policy", 00:05:39.507 "bdev_nvme_set_preferred_path", 00:05:39.507 "bdev_nvme_get_io_paths", 00:05:39.507 "bdev_nvme_remove_error_injection", 00:05:39.507 "bdev_nvme_add_error_injection", 00:05:39.507 "bdev_nvme_get_discovery_info", 00:05:39.507 "bdev_nvme_stop_discovery", 00:05:39.507 "bdev_nvme_start_discovery", 00:05:39.507 "bdev_nvme_get_controller_health_info", 00:05:39.507 "bdev_nvme_disable_controller", 00:05:39.507 "bdev_nvme_enable_controller", 00:05:39.507 "bdev_nvme_reset_controller", 00:05:39.507 "bdev_nvme_get_transport_statistics", 00:05:39.507 "bdev_nvme_apply_firmware", 00:05:39.507 "bdev_nvme_detach_controller", 00:05:39.507 "bdev_nvme_get_controllers", 00:05:39.507 "bdev_nvme_attach_controller", 00:05:39.507 "bdev_nvme_set_hotplug", 00:05:39.507 "bdev_nvme_set_options", 00:05:39.507 "bdev_passthru_delete", 00:05:39.507 "bdev_passthru_create", 00:05:39.507 "bdev_lvol_set_parent_bdev", 00:05:39.507 "bdev_lvol_set_parent", 00:05:39.507 "bdev_lvol_check_shallow_copy", 00:05:39.507 "bdev_lvol_start_shallow_copy", 00:05:39.507 "bdev_lvol_grow_lvstore", 00:05:39.507 "bdev_lvol_get_lvols", 00:05:39.507 "bdev_lvol_get_lvstores", 00:05:39.507 "bdev_lvol_delete", 00:05:39.507 "bdev_lvol_set_read_only", 00:05:39.507 
"bdev_lvol_resize", 00:05:39.507 "bdev_lvol_decouple_parent", 00:05:39.507 "bdev_lvol_inflate", 00:05:39.507 "bdev_lvol_rename", 00:05:39.507 "bdev_lvol_clone_bdev", 00:05:39.507 "bdev_lvol_clone", 00:05:39.507 "bdev_lvol_snapshot", 00:05:39.507 "bdev_lvol_create", 00:05:39.507 "bdev_lvol_delete_lvstore", 00:05:39.507 "bdev_lvol_rename_lvstore", 00:05:39.507 "bdev_lvol_create_lvstore", 00:05:39.507 "bdev_raid_set_options", 00:05:39.507 "bdev_raid_remove_base_bdev", 00:05:39.507 "bdev_raid_add_base_bdev", 00:05:39.507 "bdev_raid_delete", 00:05:39.507 "bdev_raid_create", 00:05:39.507 "bdev_raid_get_bdevs", 00:05:39.507 "bdev_error_inject_error", 00:05:39.507 "bdev_error_delete", 00:05:39.507 "bdev_error_create", 00:05:39.507 "bdev_split_delete", 00:05:39.507 "bdev_split_create", 00:05:39.507 "bdev_delay_delete", 00:05:39.507 "bdev_delay_create", 00:05:39.507 "bdev_delay_update_latency", 00:05:39.507 "bdev_zone_block_delete", 00:05:39.507 "bdev_zone_block_create", 00:05:39.507 "blobfs_create", 00:05:39.507 "blobfs_detect", 00:05:39.507 "blobfs_set_cache_size", 00:05:39.507 "bdev_aio_delete", 00:05:39.507 "bdev_aio_rescan", 00:05:39.507 "bdev_aio_create", 00:05:39.507 "bdev_ftl_set_property", 00:05:39.507 "bdev_ftl_get_properties", 00:05:39.507 "bdev_ftl_get_stats", 00:05:39.507 "bdev_ftl_unmap", 00:05:39.507 "bdev_ftl_unload", 00:05:39.507 "bdev_ftl_delete", 00:05:39.507 "bdev_ftl_load", 00:05:39.507 "bdev_ftl_create", 00:05:39.507 "bdev_virtio_attach_controller", 00:05:39.507 "bdev_virtio_scsi_get_devices", 00:05:39.507 "bdev_virtio_detach_controller", 00:05:39.507 "bdev_virtio_blk_set_hotplug", 00:05:39.507 "bdev_iscsi_delete", 00:05:39.507 "bdev_iscsi_create", 00:05:39.507 "bdev_iscsi_set_options", 00:05:39.507 "accel_error_inject_error", 00:05:39.507 "ioat_scan_accel_module", 00:05:39.507 "dsa_scan_accel_module", 00:05:39.507 "iaa_scan_accel_module", 00:05:39.507 "keyring_file_remove_key", 00:05:39.508 "keyring_file_add_key", 00:05:39.508 
"keyring_linux_set_options", 00:05:39.508 "fsdev_aio_delete", 00:05:39.508 "fsdev_aio_create", 00:05:39.508 "iscsi_get_histogram", 00:05:39.508 "iscsi_enable_histogram", 00:05:39.508 "iscsi_set_options", 00:05:39.508 "iscsi_get_auth_groups", 00:05:39.508 "iscsi_auth_group_remove_secret", 00:05:39.508 "iscsi_auth_group_add_secret", 00:05:39.508 "iscsi_delete_auth_group", 00:05:39.508 "iscsi_create_auth_group", 00:05:39.508 "iscsi_set_discovery_auth", 00:05:39.508 "iscsi_get_options", 00:05:39.508 "iscsi_target_node_request_logout", 00:05:39.508 "iscsi_target_node_set_redirect", 00:05:39.508 "iscsi_target_node_set_auth", 00:05:39.508 "iscsi_target_node_add_lun", 00:05:39.508 "iscsi_get_stats", 00:05:39.508 "iscsi_get_connections", 00:05:39.508 "iscsi_portal_group_set_auth", 00:05:39.508 "iscsi_start_portal_group", 00:05:39.508 "iscsi_delete_portal_group", 00:05:39.508 "iscsi_create_portal_group", 00:05:39.508 "iscsi_get_portal_groups", 00:05:39.508 "iscsi_delete_target_node", 00:05:39.508 "iscsi_target_node_remove_pg_ig_maps", 00:05:39.508 "iscsi_target_node_add_pg_ig_maps", 00:05:39.508 "iscsi_create_target_node", 00:05:39.508 "iscsi_get_target_nodes", 00:05:39.508 "iscsi_delete_initiator_group", 00:05:39.508 "iscsi_initiator_group_remove_initiators", 00:05:39.508 "iscsi_initiator_group_add_initiators", 00:05:39.508 "iscsi_create_initiator_group", 00:05:39.508 "iscsi_get_initiator_groups", 00:05:39.508 "nvmf_set_crdt", 00:05:39.508 "nvmf_set_config", 00:05:39.508 "nvmf_set_max_subsystems", 00:05:39.508 "nvmf_stop_mdns_prr", 00:05:39.508 "nvmf_publish_mdns_prr", 00:05:39.508 "nvmf_subsystem_get_listeners", 00:05:39.508 "nvmf_subsystem_get_qpairs", 00:05:39.508 "nvmf_subsystem_get_controllers", 00:05:39.508 "nvmf_get_stats", 00:05:39.508 "nvmf_get_transports", 00:05:39.508 "nvmf_create_transport", 00:05:39.508 "nvmf_get_targets", 00:05:39.508 "nvmf_delete_target", 00:05:39.508 "nvmf_create_target", 00:05:39.508 "nvmf_subsystem_allow_any_host", 00:05:39.508 
"nvmf_subsystem_set_keys", 00:05:39.508 "nvmf_subsystem_remove_host", 00:05:39.508 "nvmf_subsystem_add_host", 00:05:39.508 "nvmf_ns_remove_host", 00:05:39.508 "nvmf_ns_add_host", 00:05:39.508 "nvmf_subsystem_remove_ns", 00:05:39.508 "nvmf_subsystem_set_ns_ana_group", 00:05:39.508 "nvmf_subsystem_add_ns", 00:05:39.508 "nvmf_subsystem_listener_set_ana_state", 00:05:39.508 "nvmf_discovery_get_referrals", 00:05:39.508 "nvmf_discovery_remove_referral", 00:05:39.508 "nvmf_discovery_add_referral", 00:05:39.508 "nvmf_subsystem_remove_listener", 00:05:39.508 "nvmf_subsystem_add_listener", 00:05:39.508 "nvmf_delete_subsystem", 00:05:39.508 "nvmf_create_subsystem", 00:05:39.508 "nvmf_get_subsystems", 00:05:39.508 "env_dpdk_get_mem_stats", 00:05:39.508 "nbd_get_disks", 00:05:39.508 "nbd_stop_disk", 00:05:39.508 "nbd_start_disk", 00:05:39.508 "ublk_recover_disk", 00:05:39.508 "ublk_get_disks", 00:05:39.508 "ublk_stop_disk", 00:05:39.508 "ublk_start_disk", 00:05:39.508 "ublk_destroy_target", 00:05:39.508 "ublk_create_target", 00:05:39.508 "virtio_blk_create_transport", 00:05:39.508 "virtio_blk_get_transports", 00:05:39.508 "vhost_controller_set_coalescing", 00:05:39.508 "vhost_get_controllers", 00:05:39.508 "vhost_delete_controller", 00:05:39.508 "vhost_create_blk_controller", 00:05:39.508 "vhost_scsi_controller_remove_target", 00:05:39.508 "vhost_scsi_controller_add_target", 00:05:39.508 "vhost_start_scsi_controller", 00:05:39.508 "vhost_create_scsi_controller", 00:05:39.508 "thread_set_cpumask", 00:05:39.508 "scheduler_set_options", 00:05:39.508 "framework_get_governor", 00:05:39.508 "framework_get_scheduler", 00:05:39.508 "framework_set_scheduler", 00:05:39.508 "framework_get_reactors", 00:05:39.508 "thread_get_io_channels", 00:05:39.508 "thread_get_pollers", 00:05:39.508 "thread_get_stats", 00:05:39.508 "framework_monitor_context_switch", 00:05:39.508 "spdk_kill_instance", 00:05:39.508 "log_enable_timestamps", 00:05:39.508 "log_get_flags", 00:05:39.508 "log_clear_flag", 
00:05:39.508 "log_set_flag", 00:05:39.508 "log_get_level", 00:05:39.508 "log_set_level", 00:05:39.508 "log_get_print_level", 00:05:39.508 "log_set_print_level", 00:05:39.508 "framework_enable_cpumask_locks", 00:05:39.508 "framework_disable_cpumask_locks", 00:05:39.508 "framework_wait_init", 00:05:39.508 "framework_start_init", 00:05:39.508 "scsi_get_devices", 00:05:39.508 "bdev_get_histogram", 00:05:39.508 "bdev_enable_histogram", 00:05:39.508 "bdev_set_qos_limit", 00:05:39.508 "bdev_set_qd_sampling_period", 00:05:39.508 "bdev_get_bdevs", 00:05:39.508 "bdev_reset_iostat", 00:05:39.508 "bdev_get_iostat", 00:05:39.508 "bdev_examine", 00:05:39.508 "bdev_wait_for_examine", 00:05:39.508 "bdev_set_options", 00:05:39.508 "accel_get_stats", 00:05:39.508 "accel_set_options", 00:05:39.508 "accel_set_driver", 00:05:39.508 "accel_crypto_key_destroy", 00:05:39.508 "accel_crypto_keys_get", 00:05:39.508 "accel_crypto_key_create", 00:05:39.508 "accel_assign_opc", 00:05:39.508 "accel_get_module_info", 00:05:39.508 "accel_get_opc_assignments", 00:05:39.508 "vmd_rescan", 00:05:39.508 "vmd_remove_device", 00:05:39.508 "vmd_enable", 00:05:39.508 "sock_get_default_impl", 00:05:39.508 "sock_set_default_impl", 00:05:39.508 "sock_impl_set_options", 00:05:39.508 "sock_impl_get_options", 00:05:39.508 "iobuf_get_stats", 00:05:39.508 "iobuf_set_options", 00:05:39.508 "keyring_get_keys", 00:05:39.508 "framework_get_pci_devices", 00:05:39.508 "framework_get_config", 00:05:39.508 "framework_get_subsystems", 00:05:39.508 "fsdev_set_opts", 00:05:39.508 "fsdev_get_opts", 00:05:39.508 "trace_get_info", 00:05:39.508 "trace_get_tpoint_group_mask", 00:05:39.508 "trace_disable_tpoint_group", 00:05:39.508 "trace_enable_tpoint_group", 00:05:39.508 "trace_clear_tpoint_mask", 00:05:39.508 "trace_set_tpoint_mask", 00:05:39.508 "notify_get_notifications", 00:05:39.508 "notify_get_types", 00:05:39.508 "spdk_get_version", 00:05:39.508 "rpc_get_methods" 00:05:39.508 ] 00:05:39.508 10:28:42 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:39.508 10:28:42 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:39.508 10:28:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:39.508 10:28:42 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:39.508 10:28:42 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57969 00:05:39.508 10:28:42 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57969 ']' 00:05:39.508 10:28:42 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57969 00:05:39.508 10:28:42 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:39.508 10:28:42 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.508 10:28:42 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57969 00:05:39.767 killing process with pid 57969 00:05:39.767 10:28:42 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.767 10:28:42 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.767 10:28:42 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57969' 00:05:39.767 10:28:42 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57969 00:05:39.767 10:28:42 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57969 00:05:42.302 ************************************ 00:05:42.302 END TEST spdkcli_tcp 00:05:42.302 ************************************ 00:05:42.302 00:05:42.302 real 0m4.518s 00:05:42.302 user 0m8.098s 00:05:42.302 sys 0m0.673s 00:05:42.302 10:28:45 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.302 10:28:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:42.302 10:28:45 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:42.302 10:28:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.302 10:28:45 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.302 10:28:45 -- common/autotest_common.sh@10 -- # set +x 00:05:42.303 ************************************ 00:05:42.303 START TEST dpdk_mem_utility 00:05:42.303 ************************************ 00:05:42.303 10:28:45 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:42.562 * Looking for test storage... 00:05:42.562 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:42.562 10:28:45 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:42.562 10:28:45 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:42.562 10:28:45 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:42.562 10:28:45 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:42.562 10:28:45 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.562 10:28:45 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.562 10:28:45 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.562 10:28:45 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.562 10:28:45 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.562 10:28:45 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.562 10:28:45 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.562 10:28:45 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.562 10:28:45 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.562 10:28:45 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.562 10:28:45 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.562 10:28:45 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:42.562 10:28:45 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:42.562 
10:28:45 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.562 10:28:45 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:42.562 10:28:45 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:42.562 10:28:45 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:42.562 10:28:45 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.562 10:28:45 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:42.562 10:28:45 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.562 10:28:45 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:42.562 10:28:45 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:42.562 10:28:45 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.562 10:28:45 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:42.562 10:28:45 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.562 10:28:45 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.562 10:28:45 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.562 10:28:45 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:42.562 10:28:45 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.562 10:28:45 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:42.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.563 --rc genhtml_branch_coverage=1 00:05:42.563 --rc genhtml_function_coverage=1 00:05:42.563 --rc genhtml_legend=1 00:05:42.563 --rc geninfo_all_blocks=1 00:05:42.563 --rc geninfo_unexecuted_blocks=1 00:05:42.563 00:05:42.563 ' 00:05:42.563 10:28:45 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:42.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.563 --rc 
genhtml_branch_coverage=1 00:05:42.563 --rc genhtml_function_coverage=1 00:05:42.563 --rc genhtml_legend=1 00:05:42.563 --rc geninfo_all_blocks=1 00:05:42.563 --rc geninfo_unexecuted_blocks=1 00:05:42.563 00:05:42.563 ' 00:05:42.563 10:28:45 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:42.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.563 --rc genhtml_branch_coverage=1 00:05:42.563 --rc genhtml_function_coverage=1 00:05:42.563 --rc genhtml_legend=1 00:05:42.563 --rc geninfo_all_blocks=1 00:05:42.563 --rc geninfo_unexecuted_blocks=1 00:05:42.563 00:05:42.563 ' 00:05:42.563 10:28:45 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:42.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.563 --rc genhtml_branch_coverage=1 00:05:42.563 --rc genhtml_function_coverage=1 00:05:42.563 --rc genhtml_legend=1 00:05:42.563 --rc geninfo_all_blocks=1 00:05:42.563 --rc geninfo_unexecuted_blocks=1 00:05:42.563 00:05:42.563 ' 00:05:42.563 10:28:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:42.563 10:28:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:42.563 10:28:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58097 00:05:42.563 10:28:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58097 00:05:42.563 10:28:45 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58097 ']' 00:05:42.563 10:28:45 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.563 10:28:45 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.563 10:28:45 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:42.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.563 10:28:45 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.563 10:28:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:42.823 [2024-11-20 10:28:46.041521] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:05:42.823 [2024-11-20 10:28:46.041789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58097 ] 00:05:42.823 [2024-11-20 10:28:46.225400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.082 [2024-11-20 10:28:46.356902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.032 10:28:47 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.032 10:28:47 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:44.032 10:28:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:44.032 10:28:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:44.032 10:28:47 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.032 10:28:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:44.032 { 00:05:44.032 "filename": "/tmp/spdk_mem_dump.txt" 00:05:44.032 } 00:05:44.032 10:28:47 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.032 10:28:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:44.032 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:44.032 1 heaps totaling size 816.000000 MiB 00:05:44.032 size: 
816.000000 MiB heap id: 0 00:05:44.032 end heaps---------- 00:05:44.032 9 mempools totaling size 595.772034 MiB 00:05:44.032 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:44.032 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:44.032 size: 92.545471 MiB name: bdev_io_58097 00:05:44.032 size: 50.003479 MiB name: msgpool_58097 00:05:44.032 size: 36.509338 MiB name: fsdev_io_58097 00:05:44.032 size: 21.763794 MiB name: PDU_Pool 00:05:44.032 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:44.032 size: 4.133484 MiB name: evtpool_58097 00:05:44.032 size: 0.026123 MiB name: Session_Pool 00:05:44.032 end mempools------- 00:05:44.032 6 memzones totaling size 4.142822 MiB 00:05:44.032 size: 1.000366 MiB name: RG_ring_0_58097 00:05:44.032 size: 1.000366 MiB name: RG_ring_1_58097 00:05:44.032 size: 1.000366 MiB name: RG_ring_4_58097 00:05:44.032 size: 1.000366 MiB name: RG_ring_5_58097 00:05:44.032 size: 0.125366 MiB name: RG_ring_2_58097 00:05:44.032 size: 0.015991 MiB name: RG_ring_3_58097 00:05:44.032 end memzones------- 00:05:44.032 10:28:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:44.032 heap id: 0 total size: 816.000000 MiB number of busy elements: 319 number of free elements: 18 00:05:44.032 list of free elements. 
size: 16.790405 MiB
00:05:44.032 element at address: 0x200006400000 with size: 1.995972 MiB
00:05:44.032 element at address: 0x20000a600000 with size: 1.995972 MiB
00:05:44.032 element at address: 0x200003e00000 with size: 1.991028 MiB
00:05:44.033 element at address: 0x200018d00040 with size: 0.999939 MiB
00:05:44.033 element at address: 0x200019100040 with size: 0.999939 MiB
00:05:44.033 element at address: 0x200019200000 with size: 0.999084 MiB
00:05:44.033 element at address: 0x200031e00000 with size: 0.994324 MiB
00:05:44.033 element at address: 0x200000400000 with size: 0.992004 MiB
00:05:44.033 element at address: 0x200018a00000 with size: 0.959656 MiB
00:05:44.033 element at address: 0x200019500040 with size: 0.936401 MiB
00:05:44.033 element at address: 0x200000200000 with size: 0.716980 MiB
00:05:44.033 element at address: 0x20001ac00000 with size: 0.560974 MiB
00:05:44.033 element at address: 0x200000c00000 with size: 0.490173 MiB
00:05:44.033 element at address: 0x200018e00000 with size: 0.487976 MiB
00:05:44.033 element at address: 0x200019600000 with size: 0.485413 MiB
00:05:44.033 element at address: 0x200012c00000 with size: 0.443237 MiB
00:05:44.033 element at address: 0x200028000000 with size: 0.390442 MiB
00:05:44.033 element at address: 0x200000800000 with size: 0.350891 MiB
00:05:44.033 list of standard malloc elements. size: 199.288696 MiB
00:05:44.033 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:05:44.033 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:05:44.033 element at address: 0x200018bfff80 with size: 1.000183 MiB
00:05:44.033 element at address: 0x200018ffff80 with size: 1.000183 MiB
00:05:44.033 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:05:44.033 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:05:44.033 element at address: 0x2000195eff40 with size: 0.062683 MiB
00:05:44.033 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:05:44.033 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:05:44.033 element at address: 0x2000195efdc0 with size: 0.000366 MiB
00:05:44.033 element at address: 0x200012bff040 with size: 0.000305 MiB
00:05:44.033 element at address: 0x2000002d7b00 with size: 0.000244 MiB
00:05:44.033 element at address: 0x2000003d9d80 with size: 0.000244 MiB
00:05:44.033 element at address: 0x2000004fdf40 with size: 0.000244 MiB
00:05:44.033 element at address: 0x2000004fe040 with size: 0.000244 MiB
00:05:44.033 element at address: 0x2000004fe140 with size: 0.000244 MiB
00:05:44.033 element at address: 0x2000004fe240 with size: 0.000244 MiB
00:05:44.033 element at address: 0x2000004fe340 with size: 0.000244 MiB
00:05:44.033 element at address: 0x2000004fe440 with size: 0.000244 MiB
00:05:44.033 element at address: 0x2000004fe540 with size: 0.000244 MiB
00:05:44.033 element at address: 0x2000004fe640 with size: 0.000244 MiB
00:05:44.033 element at address: 0x2000004fe740 with size: 0.000244 MiB
00:05:44.033 element at address: 0x2000004fe840 with size: 0.000244 MiB
00:05:44.033 element at address: 0x2000004fe940 with size: 0.000244 MiB
00:05:44.033 element at address: 0x2000004fea40 with size: 0.000244 MiB
00:05:44.033 element at address: 0x2000004feb40 with size: 0.000244 MiB
00:05:44.033 element at address: 0x2000004fec40 with size: 0.000244 MiB
00:05:44.033 element at address: 0x2000004fed40 with size: 0.000244 MiB
00:05:44.033 element at address: 0x2000004fee40 with size: 0.000244 MiB
00:05:44.033 element at address: 0x2000004fef40 with size: 0.000244 MiB
00:05:44.033 element at address: 0x2000004ff040 with size: 0.000244 MiB
00:05:44.033 element at address: 0x2000004ff140 with size: 0.000244 MiB
00:05:44.033 element at address: 0x2000004ff240 with size: 0.000244 MiB
00:05:44.033 element at address: 0x2000004ff340 with size: 0.000244 MiB
00:05:44.033 element at address: 0x2000004ff440 with size: 0.000244 MiB
00:05:44.033 element at address: 0x2000004ff540 with size: 0.000244 MiB
00:05:44.033 element at address: 0x2000004ff640 with size: 0.000244 MiB
00:05:44.033 element at address: 0x2000004ff740 with size: 0.000244 MiB
00:05:44.033 element at address: 0x2000004ff840 with size: 0.000244 MiB
00:05:44.033 element at address: 0x2000004ff940 with size: 0.000244 MiB
00:05:44.033 element at address: 0x2000004ffbc0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x2000004ffcc0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x2000004ffdc0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x20000087e1c0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x20000087e2c0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x20000087e3c0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x20000087e4c0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x20000087e5c0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x20000087e6c0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x20000087e7c0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x20000087e8c0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x20000087e9c0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x20000087eac0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x20000087ebc0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x20000087ecc0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x20000087edc0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x20000087eec0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x20000087efc0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x20000087f0c0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x20000087f1c0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x20000087f2c0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x20000087f3c0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x20000087f4c0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x2000008ff800 with size: 0.000244 MiB
00:05:44.033 element at address: 0x2000008ffa80 with size: 0.000244 MiB
00:05:44.033 element at address: 0x200000c7d7c0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x200000c7d8c0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x200000c7d9c0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x200000c7dac0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x200000c7dbc0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x200000c7dcc0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x200000c7ddc0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x200000c7dec0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x200000c7dfc0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x200000c7e0c0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x200000c7e1c0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x200000c7e2c0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x200000c7e3c0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x200000c7e4c0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x200000c7e5c0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x200000c7e6c0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x200000c7e7c0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x200000c7e8c0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x200000c7e9c0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x200000c7eac0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x200000c7ebc0 with size: 0.000244 MiB
00:05:44.033 element at address: 0x200000cfef00 with size: 0.000244 MiB
00:05:44.033 element at address: 0x200000cff000 with size: 0.000244 MiB
00:05:44.033 element at address: 0x20000a5ff200 with size: 0.000244 MiB
00:05:44.033 element at address: 0x20000a5ff300 with size: 0.000244 MiB
00:05:44.033 element at address: 0x20000a5ff400 with size: 0.000244 MiB
00:05:44.033 element at address: 0x20000a5ff500 with size: 0.000244 MiB
00:05:44.033 element at address: 0x20000a5ff600 with size: 0.000244 MiB
00:05:44.033 element at address: 0x20000a5ff700 with size: 0.000244 MiB
00:05:44.033 element at address: 0x20000a5ff800 with size: 0.000244 MiB
00:05:44.033 element at address: 0x20000a5ff900 with size: 0.000244 MiB
00:05:44.033 element at address: 0x20000a5ffa00 with size: 0.000244 MiB
00:05:44.033 element at address: 0x20000a5ffb00 with size: 0.000244 MiB
00:05:44.033 element at address: 0x20000a5ffc00 with size: 0.000244 MiB
00:05:44.033 element at address: 0x20000a5ffd00 with size: 0.000244 MiB
00:05:44.033 element at address: 0x20000a5ffe00 with size: 0.000244 MiB
00:05:44.033 element at address: 0x20000a5fff00 with size: 0.000244 MiB
00:05:44.033 element at address: 0x200012bff180 with size: 0.000244 MiB
00:05:44.033 element at address: 0x200012bff280 with size: 0.000244 MiB
00:05:44.033 element at address: 0x200012bff380 with size: 0.000244 MiB
00:05:44.033 element at address: 0x200012bff480 with size: 0.000244 MiB
00:05:44.033 element at address: 0x200012bff580 with size: 0.000244 MiB
00:05:44.033 element at address: 0x200012bff680 with size: 0.000244 MiB
00:05:44.033 element at address: 0x200012bff780 with size: 0.000244 MiB
00:05:44.033 element at address: 0x200012bff880 with size: 0.000244 MiB
00:05:44.033 element at address: 0x200012bff980 with size: 0.000244 MiB
00:05:44.034 element at address: 0x200012bffa80 with size: 0.000244 MiB
00:05:44.034 element at address: 0x200012bffb80 with size: 0.000244 MiB
00:05:44.034 element at address: 0x200012bffc80 with size: 0.000244 MiB
00:05:44.034 element at address: 0x200012bfff00 with size: 0.000244 MiB
00:05:44.034 element at address: 0x200012c71780 with size: 0.000244 MiB
00:05:44.034 element at address: 0x200012c71880 with size: 0.000244 MiB
00:05:44.034 element at address: 0x200012c71980 with size: 0.000244 MiB
00:05:44.034 element at address: 0x200012c71a80 with size: 0.000244 MiB
00:05:44.034 element at address: 0x200012c71b80 with size: 0.000244 MiB
00:05:44.034 element at address: 0x200012c71c80 with size: 0.000244 MiB
00:05:44.034 element at address: 0x200012c71d80 with size: 0.000244 MiB
00:05:44.034 element at address: 0x200012c71e80 with size: 0.000244 MiB
00:05:44.034 element at address: 0x200012c71f80 with size: 0.000244 MiB
00:05:44.034 element at address: 0x200012c72080 with size: 0.000244 MiB
00:05:44.034 element at address: 0x200012c72180 with size: 0.000244 MiB
00:05:44.034 element at address: 0x200012cf24c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x200018afdd00 with size: 0.000244 MiB
00:05:44.034 element at address: 0x200018e7cec0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x200018e7cfc0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x200018e7d0c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x200018e7d1c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x200018e7d2c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x200018e7d3c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x200018e7d4c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x200018e7d5c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x200018e7d6c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x200018e7d7c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x200018e7d8c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x200018e7d9c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x200018efdd00 with size: 0.000244 MiB
00:05:44.034 element at address: 0x2000192ffc40 with size: 0.000244 MiB
00:05:44.034 element at address: 0x2000195efbc0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x2000195efcc0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x2000196bc680 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac8fac0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac8fec0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac900c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac901c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac902c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac903c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac904c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac905c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac906c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac907c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac908c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac909c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac90ac0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac90bc0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac90cc0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac90dc0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac90ec0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac90fc0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac910c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac911c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac912c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac913c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac914c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac915c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac916c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac917c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac918c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac919c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac91ac0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac91bc0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac91cc0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac91dc0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac91ec0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac91fc0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac920c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac921c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac922c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac923c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac924c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac925c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac926c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac927c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac928c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac929c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac92ac0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac92bc0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac92cc0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac92dc0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac92ec0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac92fc0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac930c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac931c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac932c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac933c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac934c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac935c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac936c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac937c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac938c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac939c0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac93ac0 with size: 0.000244 MiB
00:05:44.034 element at address: 0x20001ac93bc0 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20001ac93cc0 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20001ac93dc0 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20001ac93ec0 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20001ac93fc0 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20001ac940c0 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20001ac941c0 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20001ac942c0 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20001ac943c0 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20001ac944c0 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20001ac945c0 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20001ac946c0 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20001ac947c0 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20001ac948c0 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20001ac949c0 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20001ac94ac0 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20001ac94bc0 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20001ac94cc0 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20001ac94dc0 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20001ac94ec0 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20001ac94fc0 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20001ac950c0 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20001ac951c0 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20001ac952c0 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20001ac953c0 with size: 0.000244 MiB
00:05:44.035 element at address: 0x200028063f40 with size: 0.000244 MiB
00:05:44.035 element at address: 0x200028064040 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806ad00 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806af80 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806b080 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806b180 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806b280 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806b380 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806b480 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806b580 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806b680 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806b780 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806b880 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806b980 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806ba80 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806bb80 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806bc80 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806bd80 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806be80 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806bf80 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806c080 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806c180 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806c280 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806c380 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806c480 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806c580 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806c680 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806c780 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806c880 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806c980 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806ca80 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806cb80 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806cc80 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806cd80 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806ce80 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806cf80 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806d080 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806d180 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806d280 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806d380 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806d480 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806d580 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806d680 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806d780 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806d880 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806d980 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806da80 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806db80 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806dc80 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806dd80 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806de80 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806df80 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806e080 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806e180 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806e280 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806e380 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806e480 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806e580 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806e680 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806e780 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806e880 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806e980 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806ea80 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806eb80 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806ec80 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806ed80 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806ee80 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806ef80 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806f080 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806f180 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806f280 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806f380 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806f480 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806f580 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806f680 with size: 0.000244 MiB
00:05:44.035 element at address: 0x20002806f780 with size: 0.000244 MiB
00:05:44.036 element at address: 0x20002806f880 with size: 0.000244 MiB
00:05:44.036 element at address: 0x20002806f980 with size: 0.000244 MiB
00:05:44.036 element at address: 0x20002806fa80 with size: 0.000244 MiB
00:05:44.036 element at address: 0x20002806fb80 with size: 0.000244 MiB
00:05:44.036 element at address: 0x20002806fc80 with size: 0.000244 MiB
00:05:44.036 element at address: 0x20002806fd80 with size: 0.000244 MiB
00:05:44.036 element at address: 0x20002806fe80 with size: 0.000244 MiB
00:05:44.036 list of memzone associated elements.
size: 599.920898 MiB
00:05:44.036 element at address: 0x20001ac954c0 with size: 211.416809 MiB
00:05:44.036 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:44.036 element at address: 0x20002806ff80 with size: 157.562622 MiB
00:05:44.036 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:44.036 element at address: 0x200012df4740 with size: 92.045105 MiB
00:05:44.036 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58097_0
00:05:44.036 element at address: 0x200000dff340 with size: 48.003113 MiB
00:05:44.036 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58097_0
00:05:44.036 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:05:44.036 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58097_0
00:05:44.036 element at address: 0x2000197be900 with size: 20.255615 MiB
00:05:44.036 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:44.036 element at address: 0x200031ffeb00 with size: 18.005127 MiB
00:05:44.036 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:44.036 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:05:44.036 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58097_0
00:05:44.036 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:05:44.036 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58097
00:05:44.036 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:05:44.036 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58097
00:05:44.036 element at address: 0x200018efde00 with size: 1.008179 MiB
00:05:44.036 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:05:44.036 element at address: 0x2000196bc780 with size: 1.008179 MiB
00:05:44.036 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:44.036 element at address: 0x200018afde00 with size: 1.008179 MiB
00:05:44.036 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:05:44.036 element at address: 0x200012cf25c0 with size: 1.008179 MiB
00:05:44.036 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:44.036 element at address: 0x200000cff100 with size: 1.000549 MiB
00:05:44.036 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58097
00:05:44.036 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:05:44.036 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58097
00:05:44.036 element at address: 0x2000192ffd40 with size: 1.000549 MiB
00:05:44.036 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58097
00:05:44.036 element at address: 0x200031efe8c0 with size: 1.000549 MiB
00:05:44.036 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58097
00:05:44.036 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:05:44.036 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58097
00:05:44.036 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:05:44.036 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58097
00:05:44.036 element at address: 0x200018e7dac0 with size: 0.500549 MiB
00:05:44.036 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:05:44.036 element at address: 0x200012c72280 with size: 0.500549 MiB
00:05:44.036 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:44.036 element at address: 0x20001967c440 with size: 0.250549 MiB
00:05:44.036 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:44.036 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:05:44.036 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58097
00:05:44.036 element at address: 0x20000085df80 with size: 0.125549 MiB
00:05:44.036 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58097
00:05:44.036 element at address: 0x200018af5ac0 with size: 0.031799 MiB
00:05:44.036 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:44.036 element at address: 0x200028064140 with size: 0.023804 MiB
00:05:44.036 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:44.036 element at address: 0x200000859d40 with size: 0.016174 MiB
00:05:44.036 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58097
00:05:44.036 element at address: 0x20002806a2c0 with size: 0.002502 MiB
00:05:44.036 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:44.036 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:05:44.036 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58097
00:05:44.036 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:05:44.036 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58097
00:05:44.036 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:05:44.036 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58097
00:05:44.036 element at address: 0x20002806ae00 with size: 0.000366 MiB
00:05:44.036 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:44.036 10:28:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:44.036 10:28:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58097
00:05:44.036 10:28:47 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58097 ']'
00:05:44.036 10:28:47 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58097
00:05:44.036 10:28:47 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:05:44.036 10:28:47 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:44.036 10:28:47 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58097
00:05:44.036 10:28:47 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:44.036 10:28:47 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:44.036 10:28:47 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58097'
killing process with pid 58097
10:28:47 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58097
00:05:44.036 10:28:47 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58097
00:05:46.665
00:05:46.665 real 0m4.308s
00:05:46.665 user 0m4.250s
00:05:46.665 sys 0m0.619s
00:05:46.665 10:28:50 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:46.665 10:28:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:46.665 ************************************
00:05:46.665 END TEST dpdk_mem_utility
00:05:46.665 ************************************
00:05:46.665 10:28:50 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:46.665 10:28:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:46.665 10:28:50 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:46.665 10:28:50 -- common/autotest_common.sh@10 -- # set +x
00:05:46.665 ************************************
00:05:46.665 START TEST event
00:05:46.665 ************************************
00:05:46.665 10:28:50 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
* Looking for test storage...
00:05:46.925 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:05:46.925 10:28:50 event -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:46.925 10:28:50 event -- common/autotest_common.sh@1693 -- # lcov --version
00:05:46.925 10:28:50 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:46.925 10:28:50 event -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:46.925 10:28:50 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:46.925 10:28:50 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:46.925 10:28:50 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:46.925 10:28:50 event -- scripts/common.sh@336 -- # IFS=.-:
00:05:46.925 10:28:50 event -- scripts/common.sh@336 -- # read -ra ver1
00:05:46.925 10:28:50 event -- scripts/common.sh@337 -- # IFS=.-:
00:05:46.925 10:28:50 event -- scripts/common.sh@337 -- # read -ra ver2
00:05:46.925 10:28:50 event -- scripts/common.sh@338 -- # local 'op=<'
00:05:46.925 10:28:50 event -- scripts/common.sh@340 -- # ver1_l=2
00:05:46.925 10:28:50 event -- scripts/common.sh@341 -- # ver2_l=1
00:05:46.925 10:28:50 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:46.925 10:28:50 event -- scripts/common.sh@344 -- # case "$op" in
00:05:46.925 10:28:50 event -- scripts/common.sh@345 -- # : 1
00:05:46.925 10:28:50 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:46.925 10:28:50 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:46.925 10:28:50 event -- scripts/common.sh@365 -- # decimal 1
00:05:46.925 10:28:50 event -- scripts/common.sh@353 -- # local d=1
00:05:46.925 10:28:50 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:46.925 10:28:50 event -- scripts/common.sh@355 -- # echo 1
00:05:46.925 10:28:50 event -- scripts/common.sh@365 -- # ver1[v]=1
00:05:46.925 10:28:50 event -- scripts/common.sh@366 -- # decimal 2
00:05:46.925 10:28:50 event -- scripts/common.sh@353 -- # local d=2
00:05:46.925 10:28:50 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:46.925 10:28:50 event -- scripts/common.sh@355 -- # echo 2
00:05:46.925 10:28:50 event -- scripts/common.sh@366 -- # ver2[v]=2
00:05:46.925 10:28:50 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:46.925 10:28:50 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:46.925 10:28:50 event -- scripts/common.sh@368 -- # return 0
00:05:46.925 10:28:50 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:46.925 10:28:50 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:46.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:46.925 --rc genhtml_branch_coverage=1
00:05:46.925 --rc genhtml_function_coverage=1
00:05:46.925 --rc genhtml_legend=1
00:05:46.925 --rc geninfo_all_blocks=1
00:05:46.925 --rc geninfo_unexecuted_blocks=1
00:05:46.925
00:05:46.925 '
00:05:46.925 10:28:50 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:46.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:46.925 --rc genhtml_branch_coverage=1
00:05:46.925 --rc genhtml_function_coverage=1
00:05:46.925 --rc genhtml_legend=1
00:05:46.925 --rc geninfo_all_blocks=1
00:05:46.925 --rc geninfo_unexecuted_blocks=1
00:05:46.925
00:05:46.925 '
00:05:46.925 10:28:50 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:05:46.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:46.925 --rc genhtml_branch_coverage=1
00:05:46.925 --rc genhtml_function_coverage=1
00:05:46.925 --rc genhtml_legend=1
00:05:46.925 --rc geninfo_all_blocks=1
00:05:46.925 --rc geninfo_unexecuted_blocks=1
00:05:46.925
00:05:46.925 '
00:05:46.925 10:28:50 event -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:05:46.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:46.925 --rc genhtml_branch_coverage=1
00:05:46.925 --rc genhtml_function_coverage=1
00:05:46.925 --rc genhtml_legend=1
00:05:46.925 --rc geninfo_all_blocks=1
00:05:46.925 --rc geninfo_unexecuted_blocks=1
00:05:46.925
00:05:46.925 '
00:05:46.925 10:28:50 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:05:46.925 10:28:50 event -- bdev/nbd_common.sh@6 -- # set -e
00:05:46.925 10:28:50 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:46.925 10:28:50 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:05:46.925 10:28:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:46.925 10:28:50 event -- common/autotest_common.sh@10 -- # set +x
00:05:46.925 ************************************
00:05:46.925 START TEST event_perf
00:05:46.925 ************************************
00:05:46.925 10:28:50 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
Running I/O for 1 seconds...[2024-11-20 10:28:50.396355] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization...
00:05:46.925 [2024-11-20 10:28:50.397060] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58205 ]
00:05:47.185 [2024-11-20 10:28:50.582617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:47.443 [2024-11-20 10:28:50.715277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:47.443 [2024-11-20 10:28:50.715508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:47.443 Running I/O for 1 seconds...[2024-11-20 10:28:50.715439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:47.443 [2024-11-20 10:28:50.715562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:48.819
00:05:48.819 lcore 0: 100476
00:05:48.819 lcore 1: 100476
00:05:48.819 lcore 2: 100477
00:05:48.819 lcore 3: 100473
00:05:48.819 done.
00:05:48.819
00:05:48.819 real 0m1.638s
00:05:48.819 user 0m4.358s
00:05:48.819 sys 0m0.152s
00:05:48.819 10:28:51 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:48.819 10:28:51 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:05:48.819 ************************************
00:05:48.819 END TEST event_perf
00:05:48.819 ************************************
00:05:48.819 10:28:52 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:05:48.819 10:28:52 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:05:48.819 10:28:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:48.819 10:28:52 event -- common/autotest_common.sh@10 -- # set +x
00:05:48.819 ************************************
00:05:48.819 START TEST event_reactor
00:05:48.819 ************************************
00:05:48.819 10:28:52 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
[2024-11-20 10:28:52.103493] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization...
00:05:48.819 [2024-11-20 10:28:52.103690] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58250 ] 00:05:48.819 [2024-11-20 10:28:52.282315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.107 [2024-11-20 10:28:52.404434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.495 test_start 00:05:50.495 oneshot 00:05:50.495 tick 100 00:05:50.495 tick 100 00:05:50.495 tick 250 00:05:50.495 tick 100 00:05:50.495 tick 100 00:05:50.495 tick 100 00:05:50.495 tick 250 00:05:50.495 tick 500 00:05:50.495 tick 100 00:05:50.495 tick 100 00:05:50.495 tick 250 00:05:50.495 tick 100 00:05:50.495 tick 100 00:05:50.495 test_end 00:05:50.495 00:05:50.495 real 0m1.600s 00:05:50.496 user 0m1.403s 00:05:50.496 sys 0m0.086s 00:05:50.496 10:28:53 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.496 10:28:53 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:50.496 ************************************ 00:05:50.496 END TEST event_reactor 00:05:50.496 ************************************ 00:05:50.496 10:28:53 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:50.496 10:28:53 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:50.496 10:28:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.496 10:28:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.496 ************************************ 00:05:50.496 START TEST event_reactor_perf 00:05:50.496 ************************************ 00:05:50.496 10:28:53 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:50.496 [2024-11-20 
10:28:53.765966] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:05:50.496 [2024-11-20 10:28:53.766208] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58285 ] 00:05:50.496 [2024-11-20 10:28:53.943668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.755 [2024-11-20 10:28:54.068414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.137 test_start 00:05:52.137 test_end 00:05:52.137 Performance: 331071 events per second 00:05:52.137 00:05:52.137 real 0m1.605s 00:05:52.137 user 0m1.391s 00:05:52.137 sys 0m0.103s 00:05:52.137 10:28:55 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.137 10:28:55 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:52.137 ************************************ 00:05:52.137 END TEST event_reactor_perf 00:05:52.137 ************************************ 00:05:52.137 10:28:55 event -- event/event.sh@49 -- # uname -s 00:05:52.137 10:28:55 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:52.137 10:28:55 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:52.137 10:28:55 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.137 10:28:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.137 10:28:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:52.137 ************************************ 00:05:52.137 START TEST event_scheduler 00:05:52.137 ************************************ 00:05:52.137 10:28:55 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:52.137 * Looking for test storage... 
00:05:52.137 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:52.137 10:28:55 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:52.137 10:28:55 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:52.137 10:28:55 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:52.137 10:28:55 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:52.397 10:28:55 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.397 10:28:55 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.397 10:28:55 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.397 10:28:55 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.397 10:28:55 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.397 10:28:55 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.397 10:28:55 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.397 10:28:55 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.397 10:28:55 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.397 10:28:55 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.397 10:28:55 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.397 10:28:55 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:52.397 10:28:55 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:52.397 10:28:55 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.397 10:28:55 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:52.397 10:28:55 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:52.397 10:28:55 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:52.397 10:28:55 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.397 10:28:55 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:52.397 10:28:55 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.397 10:28:55 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:52.397 10:28:55 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:52.397 10:28:55 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.397 10:28:55 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:52.397 10:28:55 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.397 10:28:55 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.397 10:28:55 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.397 10:28:55 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:52.397 10:28:55 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.397 10:28:55 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:52.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.397 --rc genhtml_branch_coverage=1 00:05:52.397 --rc genhtml_function_coverage=1 00:05:52.397 --rc genhtml_legend=1 00:05:52.397 --rc geninfo_all_blocks=1 00:05:52.397 --rc geninfo_unexecuted_blocks=1 00:05:52.397 00:05:52.397 ' 00:05:52.397 10:28:55 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:52.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.397 --rc genhtml_branch_coverage=1 00:05:52.397 --rc genhtml_function_coverage=1 00:05:52.397 --rc 
genhtml_legend=1 00:05:52.397 --rc geninfo_all_blocks=1 00:05:52.397 --rc geninfo_unexecuted_blocks=1 00:05:52.397 00:05:52.397 ' 00:05:52.397 10:28:55 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:52.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.397 --rc genhtml_branch_coverage=1 00:05:52.397 --rc genhtml_function_coverage=1 00:05:52.397 --rc genhtml_legend=1 00:05:52.397 --rc geninfo_all_blocks=1 00:05:52.397 --rc geninfo_unexecuted_blocks=1 00:05:52.397 00:05:52.397 ' 00:05:52.397 10:28:55 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:52.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.397 --rc genhtml_branch_coverage=1 00:05:52.397 --rc genhtml_function_coverage=1 00:05:52.397 --rc genhtml_legend=1 00:05:52.397 --rc geninfo_all_blocks=1 00:05:52.397 --rc geninfo_unexecuted_blocks=1 00:05:52.397 00:05:52.397 ' 00:05:52.397 10:28:55 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:52.397 10:28:55 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58357 00:05:52.397 10:28:55 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:52.397 10:28:55 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.397 10:28:55 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58357 00:05:52.397 10:28:55 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58357 ']' 00:05:52.397 10:28:55 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.397 10:28:55 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.397 10:28:55 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:05:52.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.397 10:28:55 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.397 10:28:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:52.397 [2024-11-20 10:28:55.733873] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:05:52.397 [2024-11-20 10:28:55.734099] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58357 ] 00:05:52.656 [2024-11-20 10:28:55.912456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:52.656 [2024-11-20 10:28:56.050045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.656 [2024-11-20 10:28:56.050293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.656 [2024-11-20 10:28:56.050304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:52.656 [2024-11-20 10:28:56.050256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.224 10:28:56 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.224 10:28:56 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:53.224 10:28:56 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:53.224 10:28:56 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.224 10:28:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:53.224 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:53.224 POWER: Cannot set governor of lcore 0 to userspace 00:05:53.224 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:53.224 POWER: Cannot set governor of lcore 0 to performance 00:05:53.224 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:53.224 POWER: Cannot set governor of lcore 0 to userspace 00:05:53.224 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:53.224 POWER: Cannot set governor of lcore 0 to userspace 00:05:53.224 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:53.224 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:53.224 POWER: Unable to set Power Management Environment for lcore 0 00:05:53.224 [2024-11-20 10:28:56.671162] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:53.224 [2024-11-20 10:28:56.671187] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:53.224 [2024-11-20 10:28:56.671198] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:53.224 [2024-11-20 10:28:56.671219] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:53.224 [2024-11-20 10:28:56.671228] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:53.224 [2024-11-20 10:28:56.671238] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:53.224 10:28:56 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.224 10:28:56 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:53.224 10:28:56 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.224 10:28:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:53.792 [2024-11-20 10:28:57.037521] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:53.792 10:28:57 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.792 10:28:57 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:53.792 10:28:57 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.792 10:28:57 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.792 10:28:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:53.792 ************************************ 00:05:53.792 START TEST scheduler_create_thread 00:05:53.792 ************************************ 00:05:53.792 10:28:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:53.792 10:28:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:53.792 10:28:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.792 10:28:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.792 2 00:05:53.792 10:28:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.792 10:28:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:53.792 10:28:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.792 10:28:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.792 3 00:05:53.792 10:28:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.792 10:28:57 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:53.792 10:28:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.792 10:28:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.792 4 00:05:53.792 10:28:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.792 10:28:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:53.793 10:28:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.793 10:28:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.793 5 00:05:53.793 10:28:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.793 10:28:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:53.793 10:28:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.793 10:28:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.793 6 00:05:53.793 10:28:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.793 10:28:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:53.793 10:28:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.793 10:28:57 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:53.793 7 00:05:53.793 10:28:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.793 10:28:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:53.793 10:28:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.793 10:28:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.793 8 00:05:53.793 10:28:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.793 10:28:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:53.793 10:28:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.793 10:28:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.793 9 00:05:53.793 10:28:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.793 10:28:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:53.793 10:28:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.793 10:28:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.728 10 00:05:54.728 10:28:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.728 10:28:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:54.728 10:28:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.728 10:28:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.106 10:28:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.106 10:28:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:56.107 10:28:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:56.107 10:28:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.107 10:28:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.673 10:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.673 10:29:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:56.673 10:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.673 10:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.611 10:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.611 10:29:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:57.611 10:29:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:57.611 10:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.611 10:29:00 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.179 ************************************ 00:05:58.179 END TEST scheduler_create_thread 00:05:58.179 ************************************ 00:05:58.179 10:29:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.179 00:05:58.179 real 0m4.385s 00:05:58.179 user 0m0.032s 00:05:58.179 sys 0m0.008s 00:05:58.179 10:29:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.179 10:29:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.179 10:29:01 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:58.179 10:29:01 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58357 00:05:58.179 10:29:01 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58357 ']' 00:05:58.179 10:29:01 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58357 00:05:58.179 10:29:01 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:58.179 10:29:01 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.179 10:29:01 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58357 00:05:58.179 killing process with pid 58357 00:05:58.179 10:29:01 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:58.179 10:29:01 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:58.179 10:29:01 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58357' 00:05:58.179 10:29:01 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58357 00:05:58.179 10:29:01 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58357 00:05:58.438 [2024-11-20 10:29:01.717373] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:59.816 00:05:59.816 real 0m7.616s 00:05:59.816 user 0m17.732s 00:05:59.816 sys 0m0.559s 00:05:59.816 10:29:03 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.816 ************************************ 00:05:59.816 END TEST event_scheduler 00:05:59.816 ************************************ 00:05:59.816 10:29:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:59.816 10:29:03 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:59.816 10:29:03 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:59.816 10:29:03 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.816 10:29:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.816 10:29:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:59.816 ************************************ 00:05:59.816 START TEST app_repeat 00:05:59.816 ************************************ 00:05:59.816 10:29:03 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:59.816 10:29:03 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.816 10:29:03 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.816 10:29:03 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:59.816 10:29:03 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.816 10:29:03 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:59.816 10:29:03 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:59.816 10:29:03 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:59.816 10:29:03 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58490 00:05:59.816 10:29:03 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:59.816 
10:29:03 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:59.816 Process app_repeat pid: 58490 00:05:59.816 10:29:03 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58490' 00:05:59.816 10:29:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:59.816 10:29:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:59.816 spdk_app_start Round 0 00:05:59.816 10:29:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58490 /var/tmp/spdk-nbd.sock 00:05:59.816 10:29:03 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58490 ']' 00:05:59.816 10:29:03 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.816 10:29:03 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:59.817 10:29:03 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:59.817 10:29:03 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.817 10:29:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.817 [2024-11-20 10:29:03.155196] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:05:59.817 [2024-11-20 10:29:03.155349] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58490 ] 00:06:00.075 [2024-11-20 10:29:03.339781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.075 [2024-11-20 10:29:03.457545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.075 [2024-11-20 10:29:03.457576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.664 10:29:04 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.664 10:29:04 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:00.664 10:29:04 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.922 Malloc0 00:06:00.922 10:29:04 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.490 Malloc1 00:06:01.490 10:29:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.490 10:29:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.490 10:29:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.490 10:29:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:01.490 10:29:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.490 10:29:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:01.490 10:29:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.490 10:29:04 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.490 10:29:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.491 10:29:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:01.491 10:29:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.491 10:29:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:01.491 10:29:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:01.491 10:29:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:01.491 10:29:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.491 10:29:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:01.491 /dev/nbd0 00:06:01.750 10:29:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:01.750 10:29:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:01.750 10:29:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:01.750 10:29:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:01.750 10:29:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:01.750 10:29:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:01.751 10:29:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:01.751 10:29:04 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:01.751 10:29:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:01.751 10:29:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:01.751 10:29:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.751 1+0 records in 00:06:01.751 1+0 
records out 00:06:01.751 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315617 s, 13.0 MB/s 00:06:01.751 10:29:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:01.751 10:29:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:01.751 10:29:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:01.751 10:29:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:01.751 10:29:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:01.751 10:29:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.751 10:29:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.751 10:29:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:02.010 /dev/nbd1 00:06:02.010 10:29:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:02.010 10:29:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:02.010 10:29:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:02.010 10:29:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:02.010 10:29:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:02.010 10:29:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:02.010 10:29:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:02.010 10:29:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:02.010 10:29:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:02.010 10:29:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:02.010 10:29:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.010 1+0 records in 00:06:02.010 1+0 records out 00:06:02.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004815 s, 8.5 MB/s 00:06:02.010 10:29:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.010 10:29:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:02.010 10:29:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.010 10:29:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:02.010 10:29:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:02.010 10:29:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.010 10:29:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.010 10:29:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.010 10:29:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.010 10:29:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.270 10:29:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:02.270 { 00:06:02.270 "nbd_device": "/dev/nbd0", 00:06:02.270 "bdev_name": "Malloc0" 00:06:02.270 }, 00:06:02.270 { 00:06:02.270 "nbd_device": "/dev/nbd1", 00:06:02.270 "bdev_name": "Malloc1" 00:06:02.270 } 00:06:02.270 ]' 00:06:02.270 10:29:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:02.270 { 00:06:02.270 "nbd_device": "/dev/nbd0", 00:06:02.270 "bdev_name": "Malloc0" 00:06:02.270 }, 00:06:02.270 { 00:06:02.270 "nbd_device": "/dev/nbd1", 00:06:02.270 "bdev_name": "Malloc1" 00:06:02.270 } 00:06:02.270 ]' 00:06:02.270 10:29:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
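The trace above shows `nbd_get_count`: the JSON from the `nbd_get_disks` RPC is reduced to device paths with `jq`, then counted with `grep -c /dev/nbd`. A minimal self-contained sketch of that counting pattern, with a hard-coded stand-in for the real RPC output (assumption: no live SPDK instance here, and `sed` replaces `jq` so the sketch needs no extra tools):

```shell
#!/usr/bin/env bash
# Stand-in for the JSON that rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
# returns in the log above (hypothetical data, same shape).
nbd_disks_json='[
  { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" },
  { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1" }
]'
# Extract the device paths; the real scripts use:  jq -r '.[] | .nbd_device'
nbd_disks_name=$(echo "$nbd_disks_json" | sed -n 's/.*"nbd_device": "\([^"]*\)".*/\1/p')
# Count attached nbd devices, exactly as the trace does with grep -c /dev/nbd.
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd)
echo "$count"
```

With two disks attached this prints `2`, which is why the subsequent `'[' 2 -ne 2 ']'` check in the trace passes.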
00:06:02.270 10:29:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:02.270 /dev/nbd1' 00:06:02.270 10:29:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.270 10:29:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:02.270 /dev/nbd1' 00:06:02.270 10:29:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:02.270 10:29:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:02.270 10:29:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:02.270 10:29:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:02.270 10:29:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:02.270 10:29:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.270 10:29:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.270 10:29:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:02.270 10:29:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:02.270 10:29:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:02.270 10:29:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:02.270 256+0 records in 00:06:02.270 256+0 records out 00:06:02.270 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013123 s, 79.9 MB/s 00:06:02.270 10:29:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.270 10:29:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:02.270 256+0 records in 00:06:02.270 256+0 records out 00:06:02.270 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243046 s, 43.1 MB/s 00:06:02.270 10:29:05 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.270 10:29:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:02.270 256+0 records in 00:06:02.270 256+0 records out 00:06:02.270 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0276159 s, 38.0 MB/s 00:06:02.270 10:29:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:02.270 10:29:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.270 10:29:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.270 10:29:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:02.270 10:29:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:02.270 10:29:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:02.270 10:29:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:02.270 10:29:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.270 10:29:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:02.270 10:29:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.270 10:29:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:02.270 10:29:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:02.270 10:29:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:02.270 10:29:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.270 10:29:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.270 10:29:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:02.270 10:29:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:02.270 10:29:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.270 10:29:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:02.531 10:29:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:02.531 10:29:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:02.531 10:29:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:02.531 10:29:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.531 10:29:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.531 10:29:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:02.531 10:29:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.531 10:29:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.531 10:29:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.531 10:29:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:02.791 10:29:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:02.792 10:29:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:02.792 10:29:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:02.792 10:29:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.792 10:29:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.792 10:29:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:02.792 10:29:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:06:02.792 10:29:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.792 10:29:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.792 10:29:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.792 10:29:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.052 10:29:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:03.052 10:29:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:03.052 10:29:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.052 10:29:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:03.052 10:29:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:03.052 10:29:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.052 10:29:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:03.052 10:29:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:03.052 10:29:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:03.052 10:29:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:03.052 10:29:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:03.052 10:29:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:03.052 10:29:06 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:03.659 10:29:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:05.038 [2024-11-20 10:29:08.121611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:05.038 [2024-11-20 10:29:08.244621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.038 [2024-11-20 10:29:08.244628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.038 
[2024-11-20 10:29:08.453045] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:05.038 [2024-11-20 10:29:08.453149] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:06.944 spdk_app_start Round 1 00:06:06.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:06.944 10:29:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:06.944 10:29:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:06.945 10:29:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58490 /var/tmp/spdk-nbd.sock 00:06:06.945 10:29:09 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58490 ']' 00:06:06.945 10:29:09 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:06.945 10:29:09 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.945 10:29:09 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
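The repeated `(( i <= 20 ))` / `grep -q -w nbd0 /proc/partitions` / `break` lines in the trace are the `waitfornbd` pattern: poll up to 20 times for the device to register, then proceed. A hedged, self-contained sketch of that retry loop, where a temp file stands in for the device appearing in `/proc/partitions` (assumption: no real nbd device is available here):

```shell
#!/usr/bin/env bash
# Hypothetical stand-in: a path that will appear asynchronously, playing the
# role of "nbd0 shows up in /proc/partitions" in the real waitfornbd helper.
marker=$(mktemp -u)
( sleep 0.2; touch "$marker" ) &   # the device "registers" shortly after start

i=1
while (( i <= 20 )); do
    # Real scripts test:  grep -q -w nbd0 /proc/partitions
    if [ -e "$marker" ]; then
        break
    fi
    sleep 0.1
    (( i++ ))
done
wait

# If the loop exhausted all 20 tries, the device never appeared.
if (( i <= 20 )); then result=ready; else result=timeout; fi
echo "$result"
rm -f "$marker"
```

The bounded loop keeps a missing device from hanging the test forever; the trace shows the happy path where `break` fires on an early iteration.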
00:06:06.945 10:29:09 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.945 10:29:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:06.945 10:29:10 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.945 10:29:10 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:06.945 10:29:10 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:07.204 Malloc0 00:06:07.204 10:29:10 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:07.464 Malloc1 00:06:07.464 10:29:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:07.464 10:29:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.464 10:29:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.464 10:29:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:07.464 10:29:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.464 10:29:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:07.464 10:29:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:07.464 10:29:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.464 10:29:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.464 10:29:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:07.464 10:29:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.464 10:29:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:07.464 10:29:10 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:07.464 10:29:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:07.464 10:29:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.464 10:29:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:07.729 /dev/nbd0 00:06:07.729 10:29:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:07.729 10:29:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:07.729 10:29:11 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:07.729 10:29:11 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:07.729 10:29:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:07.729 10:29:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:07.729 10:29:11 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:07.729 10:29:11 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:07.729 10:29:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:07.729 10:29:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:07.729 10:29:11 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.729 1+0 records in 00:06:07.729 1+0 records out 00:06:07.729 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295872 s, 13.8 MB/s 00:06:07.729 10:29:11 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.729 10:29:11 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:07.729 10:29:11 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.729 
10:29:11 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:07.729 10:29:11 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:07.729 10:29:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.729 10:29:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.730 10:29:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:07.998 /dev/nbd1 00:06:07.998 10:29:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:07.998 10:29:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:07.998 10:29:11 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:07.998 10:29:11 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:07.998 10:29:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:07.998 10:29:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:07.998 10:29:11 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:07.998 10:29:11 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:07.998 10:29:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:07.998 10:29:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:07.998 10:29:11 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.998 1+0 records in 00:06:07.998 1+0 records out 00:06:07.998 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279665 s, 14.6 MB/s 00:06:07.998 10:29:11 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.998 10:29:11 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:07.998 10:29:11 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.998 10:29:11 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:07.998 10:29:11 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:07.998 10:29:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.998 10:29:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.998 10:29:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.998 10:29:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.998 10:29:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.263 10:29:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:08.263 { 00:06:08.263 "nbd_device": "/dev/nbd0", 00:06:08.263 "bdev_name": "Malloc0" 00:06:08.263 }, 00:06:08.263 { 00:06:08.263 "nbd_device": "/dev/nbd1", 00:06:08.263 "bdev_name": "Malloc1" 00:06:08.263 } 00:06:08.263 ]' 00:06:08.263 10:29:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:08.263 { 00:06:08.263 "nbd_device": "/dev/nbd0", 00:06:08.263 "bdev_name": "Malloc0" 00:06:08.263 }, 00:06:08.263 { 00:06:08.263 "nbd_device": "/dev/nbd1", 00:06:08.263 "bdev_name": "Malloc1" 00:06:08.263 } 00:06:08.263 ]' 00:06:08.263 10:29:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.263 10:29:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:08.263 /dev/nbd1' 00:06:08.263 10:29:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:08.263 /dev/nbd1' 00:06:08.263 10:29:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.263 10:29:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:08.263 10:29:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:08.263 
10:29:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:08.263 10:29:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:08.263 10:29:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:08.263 10:29:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.263 10:29:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.263 10:29:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:08.263 10:29:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:08.263 10:29:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:08.263 10:29:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:08.263 256+0 records in 00:06:08.263 256+0 records out 00:06:08.263 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136341 s, 76.9 MB/s 00:06:08.263 10:29:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.263 10:29:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:08.263 256+0 records in 00:06:08.263 256+0 records out 00:06:08.263 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253524 s, 41.4 MB/s 00:06:08.263 10:29:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.263 10:29:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:08.263 256+0 records in 00:06:08.263 256+0 records out 00:06:08.263 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255436 s, 41.1 MB/s 00:06:08.263 10:29:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:06:08.263 10:29:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.263 10:29:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.263 10:29:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:08.263 10:29:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:08.264 10:29:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:08.264 10:29:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:08.264 10:29:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.264 10:29:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:08.264 10:29:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.264 10:29:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:08.523 10:29:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:08.523 10:29:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:08.523 10:29:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.523 10:29:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.523 10:29:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:08.523 10:29:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:08.523 10:29:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.523 10:29:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:08.523 10:29:11 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:08.523 10:29:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:08.523 10:29:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:08.523 10:29:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.523 10:29:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.523 10:29:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:08.523 10:29:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:08.523 10:29:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.523 10:29:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.523 10:29:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:08.783 10:29:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:08.783 10:29:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:08.783 10:29:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:08.783 10:29:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.783 10:29:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.783 10:29:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:08.783 10:29:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:08.783 10:29:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.783 10:29:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.783 10:29:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.783 10:29:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.042 10:29:12 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:09.042 10:29:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:09.042 10:29:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.042 10:29:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:09.042 10:29:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:09.042 10:29:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.042 10:29:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:09.042 10:29:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:09.042 10:29:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:09.042 10:29:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:09.042 10:29:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:09.042 10:29:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:09.042 10:29:12 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:09.611 10:29:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:11.002 [2024-11-20 10:29:14.158033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:11.002 [2024-11-20 10:29:14.273595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.002 [2024-11-20 10:29:14.273620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.262 [2024-11-20 10:29:14.485680] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:11.262 [2024-11-20 10:29:14.485776] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:12.641 spdk_app_start Round 2 00:06:12.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
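The `nbd_dd_data_verify` sequences in the rounds above follow one cycle: fill a temp file with 1 MiB of random data, `dd` it onto each nbd device, then `cmp` each device against the source. A sketch of that write/verify cycle using temp files in place of `/dev/nbd0` and `/dev/nbd1` (assumption: no real block devices; the real scripts also pass `oflag=direct` and limit the compare with `cmp -b -n 1M`):

```shell
#!/usr/bin/env bash
tmp_file=$(mktemp)   # plays /home/vagrant/.../nbdrandtest
nbd0=$(mktemp)       # hypothetical stand-in for /dev/nbd0
nbd1=$(mktemp)       # hypothetical stand-in for /dev/nbd1

# Write phase: 256 blocks of 4096 random bytes, as in the log's dd lines.
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
for dev in "$nbd0" "$nbd1"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 2>/dev/null
done

# Verify phase: byte-for-byte compare of each device against the source.
if cmp "$tmp_file" "$nbd0" && cmp "$tmp_file" "$nbd1"; then
    verify_status=verified
else
    verify_status=mismatch
fi
echo "$verify_status"
rm -f "$tmp_file" "$nbd0" "$nbd1"
```

Round-tripping the same random payload through both devices is what gives the test confidence that the Malloc bdevs behind the nbd endpoints store data faithfully.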
00:06:12.641 10:29:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:12.641 10:29:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:12.641 10:29:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58490 /var/tmp/spdk-nbd.sock 00:06:12.641 10:29:15 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58490 ']' 00:06:12.641 10:29:15 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:12.641 10:29:15 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.641 10:29:15 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:12.641 10:29:15 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.641 10:29:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:12.900 10:29:16 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.900 10:29:16 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:12.900 10:29:16 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.159 Malloc0 00:06:13.159 10:29:16 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.419 Malloc1 00:06:13.419 10:29:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.419 10:29:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.419 10:29:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.419 10:29:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:13.419 10:29:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.419 10:29:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:13.419 10:29:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.419 10:29:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.419 10:29:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.419 10:29:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:13.419 10:29:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.419 10:29:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:13.419 10:29:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:13.419 10:29:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:13.419 10:29:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.419 10:29:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:13.678 /dev/nbd0 00:06:13.678 10:29:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:13.678 10:29:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:13.678 10:29:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:13.678 10:29:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:13.678 10:29:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:13.678 10:29:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:13.678 10:29:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:13.678 10:29:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:13.678 10:29:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:06:13.678 10:29:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:13.678 10:29:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.678 1+0 records in 00:06:13.678 1+0 records out 00:06:13.678 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000205576 s, 19.9 MB/s 00:06:13.678 10:29:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.678 10:29:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:13.678 10:29:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.678 10:29:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:13.678 10:29:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:13.678 10:29:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.678 10:29:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.678 10:29:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:13.936 /dev/nbd1 00:06:13.936 10:29:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:13.936 10:29:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:13.936 10:29:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:13.936 10:29:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:13.936 10:29:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:13.936 10:29:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:13.936 10:29:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:13.936 10:29:17 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:06:13.936 10:29:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:13.936 10:29:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:13.936 10:29:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.936 1+0 records in 00:06:13.936 1+0 records out 00:06:13.936 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432712 s, 9.5 MB/s 00:06:13.936 10:29:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.936 10:29:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:13.936 10:29:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.936 10:29:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:13.936 10:29:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:13.936 10:29:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.936 10:29:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.936 10:29:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.936 10:29:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.936 10:29:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.195 10:29:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:14.195 { 00:06:14.195 "nbd_device": "/dev/nbd0", 00:06:14.195 "bdev_name": "Malloc0" 00:06:14.195 }, 00:06:14.195 { 00:06:14.195 "nbd_device": "/dev/nbd1", 00:06:14.195 "bdev_name": "Malloc1" 00:06:14.195 } 00:06:14.195 ]' 00:06:14.195 10:29:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:14.195 10:29:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:14.195 { 00:06:14.195 "nbd_device": "/dev/nbd0", 00:06:14.195 "bdev_name": "Malloc0" 00:06:14.195 }, 00:06:14.195 { 00:06:14.195 "nbd_device": "/dev/nbd1", 00:06:14.195 "bdev_name": "Malloc1" 00:06:14.195 } 00:06:14.195 ]' 00:06:14.195 10:29:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:14.195 /dev/nbd1' 00:06:14.195 10:29:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:14.195 /dev/nbd1' 00:06:14.195 10:29:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.195 10:29:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:14.195 10:29:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:14.195 10:29:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:14.195 10:29:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:14.195 10:29:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:14.195 10:29:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.195 10:29:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.195 10:29:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:14.195 10:29:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:14.195 10:29:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:14.195 10:29:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:14.195 256+0 records in 00:06:14.195 256+0 records out 00:06:14.195 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125709 s, 83.4 MB/s 00:06:14.195 10:29:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.195 10:29:17 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:14.195 256+0 records in 00:06:14.195 256+0 records out 00:06:14.195 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0230599 s, 45.5 MB/s 00:06:14.195 10:29:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.195 10:29:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:14.195 256+0 records in 00:06:14.195 256+0 records out 00:06:14.195 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022918 s, 45.8 MB/s 00:06:14.195 10:29:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:14.195 10:29:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.195 10:29:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.195 10:29:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:14.195 10:29:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:14.195 10:29:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:14.195 10:29:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:14.195 10:29:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.195 10:29:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:14.195 10:29:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.195 10:29:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:14.195 10:29:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:14.195 10:29:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:14.195 10:29:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.195 10:29:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.195 10:29:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:14.195 10:29:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:14.195 10:29:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.195 10:29:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:14.453 10:29:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:14.453 10:29:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:14.453 10:29:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:14.453 10:29:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.453 10:29:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.453 10:29:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:14.453 10:29:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.453 10:29:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.453 10:29:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.453 10:29:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:14.712 10:29:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:14.712 10:29:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:14.712 10:29:18 event.app_repeat -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:06:14.712 10:29:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.712 10:29:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.712 10:29:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:14.712 10:29:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.712 10:29:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.712 10:29:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.712 10:29:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.712 10:29:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.972 10:29:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:14.972 10:29:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:14.972 10:29:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.972 10:29:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:14.972 10:29:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:14.972 10:29:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.972 10:29:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:14.972 10:29:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:14.972 10:29:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:14.972 10:29:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:14.972 10:29:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:14.972 10:29:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:14.972 10:29:18 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:15.540 10:29:18 event.app_repeat -- 
event/event.sh@35 -- # sleep 3 00:06:16.917 [2024-11-20 10:29:20.144149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.917 [2024-11-20 10:29:20.266371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.917 [2024-11-20 10:29:20.266386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.176 [2024-11-20 10:29:20.478956] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:17.176 [2024-11-20 10:29:20.479073] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:18.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:18.598 10:29:21 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58490 /var/tmp/spdk-nbd.sock 00:06:18.598 10:29:21 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58490 ']' 00:06:18.598 10:29:21 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:18.598 10:29:21 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.598 10:29:21 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:18.598 10:29:21 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.598 10:29:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:18.857 10:29:22 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.857 10:29:22 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:18.857 10:29:22 event.app_repeat -- event/event.sh@39 -- # killprocess 58490 00:06:18.857 10:29:22 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58490 ']' 00:06:18.857 10:29:22 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58490 00:06:18.857 10:29:22 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:18.857 10:29:22 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.857 10:29:22 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58490 00:06:18.857 killing process with pid 58490 00:06:18.857 10:29:22 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.857 10:29:22 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.857 10:29:22 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58490' 00:06:18.857 10:29:22 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58490 00:06:18.857 10:29:22 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58490 00:06:19.794 spdk_app_start is called in Round 0. 00:06:19.794 Shutdown signal received, stop current app iteration 00:06:19.794 Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 reinitialization... 00:06:19.794 spdk_app_start is called in Round 1. 00:06:19.794 Shutdown signal received, stop current app iteration 00:06:19.794 Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 reinitialization... 00:06:19.794 spdk_app_start is called in Round 2. 
00:06:19.794 Shutdown signal received, stop current app iteration 00:06:19.794 Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 reinitialization... 00:06:19.794 spdk_app_start is called in Round 3. 00:06:19.794 Shutdown signal received, stop current app iteration 00:06:19.794 10:29:23 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:19.794 10:29:23 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:19.794 00:06:19.794 real 0m20.176s 00:06:19.794 user 0m43.580s 00:06:19.794 sys 0m2.922s 00:06:19.794 ************************************ 00:06:19.794 END TEST app_repeat 00:06:19.794 ************************************ 00:06:19.794 10:29:23 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.794 10:29:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:20.053 10:29:23 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:20.053 10:29:23 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:20.053 10:29:23 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.053 10:29:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.053 10:29:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.053 ************************************ 00:06:20.053 START TEST cpu_locks 00:06:20.053 ************************************ 00:06:20.054 10:29:23 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:20.054 * Looking for test storage... 
00:06:20.054 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:20.054 10:29:23 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:20.054 10:29:23 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:20.054 10:29:23 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:20.054 10:29:23 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:20.054 10:29:23 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.054 10:29:23 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.054 10:29:23 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.054 10:29:23 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.054 10:29:23 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.054 10:29:23 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.054 10:29:23 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.054 10:29:23 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.054 10:29:23 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.054 10:29:23 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.054 10:29:23 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.054 10:29:23 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:20.054 10:29:23 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:20.054 10:29:23 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.054 10:29:23 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:20.054 10:29:23 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:20.054 10:29:23 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:20.054 10:29:23 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.054 10:29:23 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:20.054 10:29:23 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.054 10:29:23 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:20.313 10:29:23 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:20.313 10:29:23 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.313 10:29:23 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:20.313 10:29:23 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.313 10:29:23 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.313 10:29:23 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.313 10:29:23 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:20.313 10:29:23 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.313 10:29:23 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:20.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.313 --rc genhtml_branch_coverage=1 00:06:20.313 --rc genhtml_function_coverage=1 00:06:20.313 --rc genhtml_legend=1 00:06:20.313 --rc geninfo_all_blocks=1 00:06:20.313 --rc geninfo_unexecuted_blocks=1 00:06:20.313 00:06:20.313 ' 00:06:20.313 10:29:23 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:20.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.313 --rc genhtml_branch_coverage=1 00:06:20.313 --rc genhtml_function_coverage=1 00:06:20.313 --rc genhtml_legend=1 00:06:20.313 --rc geninfo_all_blocks=1 00:06:20.313 --rc geninfo_unexecuted_blocks=1 
00:06:20.313 00:06:20.313 ' 00:06:20.313 10:29:23 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:20.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.313 --rc genhtml_branch_coverage=1 00:06:20.313 --rc genhtml_function_coverage=1 00:06:20.313 --rc genhtml_legend=1 00:06:20.313 --rc geninfo_all_blocks=1 00:06:20.313 --rc geninfo_unexecuted_blocks=1 00:06:20.313 00:06:20.313 ' 00:06:20.313 10:29:23 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:20.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.313 --rc genhtml_branch_coverage=1 00:06:20.313 --rc genhtml_function_coverage=1 00:06:20.313 --rc genhtml_legend=1 00:06:20.313 --rc geninfo_all_blocks=1 00:06:20.313 --rc geninfo_unexecuted_blocks=1 00:06:20.313 00:06:20.313 ' 00:06:20.313 10:29:23 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:20.313 10:29:23 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:20.313 10:29:23 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:20.313 10:29:23 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:20.313 10:29:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.313 10:29:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.313 10:29:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.313 ************************************ 00:06:20.313 START TEST default_locks 00:06:20.313 ************************************ 00:06:20.313 10:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:20.313 10:29:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58943 00:06:20.313 10:29:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:20.313 
10:29:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58943 00:06:20.313 10:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58943 ']' 00:06:20.313 10:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.313 10:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.313 10:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.313 10:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.313 10:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.313 [2024-11-20 10:29:23.655427] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:06:20.313 [2024-11-20 10:29:23.655578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58943 ] 00:06:20.572 [2024-11-20 10:29:23.834935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.572 [2024-11-20 10:29:23.954776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.506 10:29:24 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.506 10:29:24 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:21.506 10:29:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58943 00:06:21.506 10:29:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58943 00:06:21.506 10:29:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:21.764 10:29:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58943 00:06:21.764 10:29:25 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58943 ']' 00:06:21.764 10:29:25 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58943 00:06:21.764 10:29:25 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:21.764 10:29:25 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.764 10:29:25 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58943 00:06:21.764 10:29:25 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:21.764 10:29:25 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:21.764 killing process with pid 58943 00:06:21.764 10:29:25 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58943' 00:06:21.764 10:29:25 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58943 00:06:21.764 10:29:25 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58943 00:06:24.296 10:29:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58943 00:06:24.296 10:29:27 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:24.296 10:29:27 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58943 00:06:24.296 10:29:27 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:24.296 10:29:27 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:24.296 10:29:27 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:24.296 10:29:27 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:24.296 10:29:27 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58943 00:06:24.296 10:29:27 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58943 ']' 00:06:24.296 10:29:27 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.296 10:29:27 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.296 10:29:27 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:24.296 10:29:27 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.296 10:29:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.296 ERROR: process (pid: 58943) is no longer running 00:06:24.296 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58943) - No such process 00:06:24.296 10:29:27 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.296 10:29:27 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:24.296 10:29:27 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:24.296 10:29:27 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:24.296 10:29:27 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:24.296 10:29:27 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:24.296 10:29:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:24.296 10:29:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:24.296 10:29:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:24.296 10:29:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:24.296 00:06:24.296 real 0m4.140s 00:06:24.296 user 0m4.064s 00:06:24.296 sys 0m0.610s 00:06:24.296 10:29:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.296 10:29:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.296 ************************************ 00:06:24.296 END TEST default_locks 00:06:24.296 ************************************ 00:06:24.296 10:29:27 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:24.296 10:29:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:06:24.296 10:29:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.296 10:29:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.296 ************************************ 00:06:24.296 START TEST default_locks_via_rpc 00:06:24.296 ************************************ 00:06:24.296 10:29:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:24.296 10:29:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59020 00:06:24.296 10:29:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:24.296 10:29:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59020 00:06:24.296 10:29:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59020 ']' 00:06:24.296 10:29:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.296 10:29:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.296 10:29:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.296 10:29:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.296 10:29:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.556 [2024-11-20 10:29:27.856605] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:06:24.556 [2024-11-20 10:29:27.856745] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59020 ] 00:06:24.556 [2024-11-20 10:29:28.017167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.815 [2024-11-20 10:29:28.145556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.753 10:29:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.753 10:29:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:25.753 10:29:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:25.753 10:29:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.753 10:29:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.753 10:29:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.753 10:29:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:25.753 10:29:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:25.753 10:29:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:25.753 10:29:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:25.753 10:29:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:25.753 10:29:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.753 10:29:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.753 10:29:29 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.753 10:29:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59020 00:06:25.753 10:29:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.753 10:29:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59020 00:06:26.011 10:29:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59020 00:06:26.011 10:29:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59020 ']' 00:06:26.011 10:29:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59020 00:06:26.011 10:29:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:26.011 10:29:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:26.011 10:29:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59020 00:06:26.269 10:29:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:26.269 10:29:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:26.269 killing process with pid 59020 00:06:26.269 10:29:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59020' 00:06:26.269 10:29:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59020 00:06:26.269 10:29:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59020 00:06:28.812 00:06:28.812 real 0m4.261s 00:06:28.812 user 0m4.227s 00:06:28.812 sys 0m0.648s 00:06:28.813 10:29:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.813 10:29:32 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.813 ************************************ 00:06:28.813 END TEST default_locks_via_rpc 00:06:28.813 ************************************ 00:06:28.813 10:29:32 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:28.813 10:29:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.813 10:29:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.813 10:29:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.813 ************************************ 00:06:28.813 START TEST non_locking_app_on_locked_coremask 00:06:28.813 ************************************ 00:06:28.813 10:29:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:28.813 10:29:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59100 00:06:28.813 10:29:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59100 /var/tmp/spdk.sock 00:06:28.813 10:29:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:28.813 10:29:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59100 ']' 00:06:28.813 10:29:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.813 10:29:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:28.813 10:29:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.813 10:29:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.813 10:29:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.813 [2024-11-20 10:29:32.181043] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:06:28.813 [2024-11-20 10:29:32.181169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59100 ] 00:06:29.070 [2024-11-20 10:29:32.355249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.070 [2024-11-20 10:29:32.476099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.004 10:29:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.004 10:29:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:30.004 10:29:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59116 00:06:30.004 10:29:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:30.004 10:29:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59116 /var/tmp/spdk2.sock 00:06:30.004 10:29:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59116 ']' 00:06:30.004 10:29:33 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.004 10:29:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:30.004 10:29:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.004 10:29:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.004 10:29:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.263 [2024-11-20 10:29:33.497619] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:06:30.263 [2024-11-20 10:29:33.497741] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59116 ] 00:06:30.263 [2024-11-20 10:29:33.673091] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:30.263 [2024-11-20 10:29:33.673169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.521 [2024-11-20 10:29:33.919737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.096 10:29:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.096 10:29:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:33.096 10:29:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59100 00:06:33.096 10:29:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59100 00:06:33.097 10:29:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:33.097 10:29:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59100 00:06:33.097 10:29:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59100 ']' 00:06:33.097 10:29:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59100 00:06:33.097 10:29:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:33.097 10:29:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.097 10:29:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59100 00:06:33.097 10:29:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:33.097 10:29:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:33.097 killing process with pid 59100 00:06:33.097 10:29:36 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59100' 00:06:33.097 10:29:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59100 00:06:33.097 10:29:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59100 00:06:38.372 10:29:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59116 00:06:38.372 10:29:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59116 ']' 00:06:38.372 10:29:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59116 00:06:38.372 10:29:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:38.372 10:29:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.372 10:29:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59116 00:06:38.372 10:29:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:38.372 10:29:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:38.372 killing process with pid 59116 00:06:38.372 10:29:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59116' 00:06:38.372 10:29:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59116 00:06:38.372 10:29:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59116 00:06:40.911 00:06:40.911 real 0m12.008s 00:06:40.911 user 0m12.236s 00:06:40.911 sys 0m1.243s 00:06:40.911 10:29:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:40.911 10:29:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.911 ************************************ 00:06:40.911 END TEST non_locking_app_on_locked_coremask 00:06:40.911 ************************************ 00:06:40.911 10:29:44 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:40.911 10:29:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.911 10:29:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.911 10:29:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.911 ************************************ 00:06:40.911 START TEST locking_app_on_unlocked_coremask 00:06:40.911 ************************************ 00:06:40.911 10:29:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:40.911 10:29:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59267 00:06:40.911 10:29:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:40.911 10:29:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59267 /var/tmp/spdk.sock 00:06:40.911 10:29:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59267 ']' 00:06:40.911 10:29:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.911 10:29:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:40.911 10:29:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.911 10:29:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.911 10:29:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.911 [2024-11-20 10:29:44.263082] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:06:40.911 [2024-11-20 10:29:44.263267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59267 ] 00:06:41.171 [2024-11-20 10:29:44.447170] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:41.171 [2024-11-20 10:29:44.447229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.171 [2024-11-20 10:29:44.569215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.111 10:29:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.111 10:29:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:42.111 10:29:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59289 00:06:42.111 10:29:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:42.111 10:29:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59289 /var/tmp/spdk2.sock 00:06:42.111 10:29:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59289 ']' 
00:06:42.111 10:29:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.111 10:29:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:42.111 10:29:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:42.111 10:29:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.111 10:29:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.371 [2024-11-20 10:29:45.589616] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:06:42.371 [2024-11-20 10:29:45.589749] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59289 ] 00:06:42.371 [2024-11-20 10:29:45.764029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.631 [2024-11-20 10:29:46.006473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.170 10:29:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.170 10:29:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:45.170 10:29:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59289 00:06:45.170 10:29:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59289 00:06:45.170 10:29:48 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:45.170 10:29:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59267 00:06:45.170 10:29:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59267 ']' 00:06:45.170 10:29:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59267 00:06:45.170 10:29:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:45.170 10:29:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.170 10:29:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59267 00:06:45.429 10:29:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.429 10:29:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.429 10:29:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59267' 00:06:45.429 killing process with pid 59267 00:06:45.429 10:29:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59267 00:06:45.429 10:29:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59267 00:06:50.711 10:29:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59289 00:06:50.711 10:29:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59289 ']' 00:06:50.711 10:29:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59289 00:06:50.711 10:29:53 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@959 -- # uname 00:06:50.711 10:29:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:50.711 10:29:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59289 00:06:50.711 10:29:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:50.711 10:29:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:50.711 10:29:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59289' 00:06:50.711 killing process with pid 59289 00:06:50.711 10:29:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59289 00:06:50.711 10:29:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59289 00:06:53.248 00:06:53.248 real 0m12.083s 00:06:53.248 user 0m12.444s 00:06:53.248 sys 0m1.222s 00:06:53.248 10:29:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.248 10:29:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.248 ************************************ 00:06:53.248 END TEST locking_app_on_unlocked_coremask 00:06:53.249 ************************************ 00:06:53.249 10:29:56 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:53.249 10:29:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.249 10:29:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.249 10:29:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.249 ************************************ 00:06:53.249 START TEST 
locking_app_on_locked_coremask 00:06:53.249 ************************************ 00:06:53.249 10:29:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:53.249 10:29:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59444 00:06:53.249 10:29:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:53.249 10:29:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59444 /var/tmp/spdk.sock 00:06:53.249 10:29:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59444 ']' 00:06:53.249 10:29:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.249 10:29:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.249 10:29:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.249 10:29:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.249 10:29:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.249 [2024-11-20 10:29:56.395067] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:06:53.249 [2024-11-20 10:29:56.395193] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59444 ] 00:06:53.249 [2024-11-20 10:29:56.573375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.249 [2024-11-20 10:29:56.699731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.204 10:29:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.204 10:29:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:54.204 10:29:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:54.204 10:29:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59466 00:06:54.204 10:29:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59466 /var/tmp/spdk2.sock 00:06:54.204 10:29:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:54.204 10:29:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59466 /var/tmp/spdk2.sock 00:06:54.204 10:29:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:54.204 10:29:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.204 10:29:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:54.204 10:29:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:06:54.204 10:29:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59466 /var/tmp/spdk2.sock 00:06:54.204 10:29:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59466 ']' 00:06:54.204 10:29:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:54.204 10:29:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.204 10:29:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:54.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:54.204 10:29:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.204 10:29:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.467 [2024-11-20 10:29:57.706622] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:06:54.467 [2024-11-20 10:29:57.706761] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59466 ] 00:06:54.467 [2024-11-20 10:29:57.881880] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59444 has claimed it. 00:06:54.467 [2024-11-20 10:29:57.881954] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:55.035 ERROR: process (pid: 59466) is no longer running 00:06:55.035 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59466) - No such process 00:06:55.035 10:29:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.035 10:29:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:55.035 10:29:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:55.035 10:29:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:55.035 10:29:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:55.035 10:29:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:55.036 10:29:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59444 00:06:55.036 10:29:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59444 00:06:55.036 10:29:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:55.313 10:29:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59444 00:06:55.313 10:29:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59444 ']' 00:06:55.313 10:29:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59444 00:06:55.313 10:29:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:55.314 10:29:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.314 10:29:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59444 00:06:55.314 
10:29:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:55.314 10:29:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:55.314 10:29:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59444' 00:06:55.314 killing process with pid 59444 00:06:55.314 10:29:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59444 00:06:55.314 10:29:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59444 00:06:57.850 00:06:57.850 real 0m4.896s 00:06:57.850 user 0m5.123s 00:06:57.850 sys 0m0.736s 00:06:57.850 10:30:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.850 10:30:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.850 ************************************ 00:06:57.850 END TEST locking_app_on_locked_coremask 00:06:57.850 ************************************ 00:06:57.850 10:30:01 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:57.850 10:30:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.850 10:30:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.850 10:30:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.850 ************************************ 00:06:57.850 START TEST locking_overlapped_coremask 00:06:57.850 ************************************ 00:06:57.850 10:30:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:57.850 10:30:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59530 00:06:57.850 10:30:01 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59530 /var/tmp/spdk.sock 00:06:57.850 10:30:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59530 ']' 00:06:57.850 10:30:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.850 10:30:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:57.850 10:30:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.850 10:30:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.850 10:30:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.850 10:30:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.111 [2024-11-20 10:30:01.345343] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:06:58.111 [2024-11-20 10:30:01.345467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59530 ] 00:06:58.111 [2024-11-20 10:30:01.517532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:58.371 [2024-11-20 10:30:01.642844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.371 [2024-11-20 10:30:01.642851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.371 [2024-11-20 10:30:01.642861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.307 10:30:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.307 10:30:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:59.307 10:30:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59553 00:06:59.307 10:30:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:59.307 10:30:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59553 /var/tmp/spdk2.sock 00:06:59.307 10:30:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:59.307 10:30:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59553 /var/tmp/spdk2.sock 00:06:59.307 10:30:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:59.307 10:30:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.307 10:30:02 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:59.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:59.307 10:30:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.307 10:30:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59553 /var/tmp/spdk2.sock 00:06:59.307 10:30:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59553 ']' 00:06:59.307 10:30:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:59.307 10:30:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.307 10:30:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:59.307 10:30:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.307 10:30:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.307 [2024-11-20 10:30:02.651072] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:06:59.307 [2024-11-20 10:30:02.651207] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59553 ] 00:06:59.566 [2024-11-20 10:30:02.826530] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59530 has claimed it. 00:06:59.566 [2024-11-20 10:30:02.826595] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:59.825 ERROR: process (pid: 59553) is no longer running 00:06:59.825 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59553) - No such process 00:06:59.825 10:30:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.825 10:30:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:59.825 10:30:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:59.825 10:30:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:59.825 10:30:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:59.825 10:30:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:59.825 10:30:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:59.825 10:30:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:59.825 10:30:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:59.825 10:30:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:59.825 10:30:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59530 00:06:59.825 10:30:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59530 ']' 00:06:59.825 10:30:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59530 00:06:59.825 10:30:03 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:59.825 10:30:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.825 10:30:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59530 00:07:00.085 killing process with pid 59530 00:07:00.085 10:30:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:00.085 10:30:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:00.085 10:30:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59530' 00:07:00.085 10:30:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59530 00:07:00.085 10:30:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59530 00:07:02.622 00:07:02.622 real 0m4.597s 00:07:02.622 user 0m12.547s 00:07:02.622 sys 0m0.576s 00:07:02.622 10:30:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.622 10:30:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.622 ************************************ 00:07:02.622 END TEST locking_overlapped_coremask 00:07:02.622 ************************************ 00:07:02.622 10:30:05 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:02.622 10:30:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:02.622 10:30:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.622 10:30:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.622 ************************************ 00:07:02.622 START TEST 
locking_overlapped_coremask_via_rpc 00:07:02.622 ************************************ 00:07:02.622 10:30:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:02.622 10:30:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59623 00:07:02.622 10:30:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:02.622 10:30:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59623 /var/tmp/spdk.sock 00:07:02.622 10:30:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59623 ']' 00:07:02.622 10:30:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.622 10:30:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.622 10:30:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.622 10:30:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.622 10:30:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.622 [2024-11-20 10:30:06.006186] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:07:02.622 [2024-11-20 10:30:06.006323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59623 ] 00:07:02.880 [2024-11-20 10:30:06.184256] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:02.880 [2024-11-20 10:30:06.184319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:02.880 [2024-11-20 10:30:06.311030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.880 [2024-11-20 10:30:06.311188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.880 [2024-11-20 10:30:06.311224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.816 10:30:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.816 10:30:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:03.816 10:30:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:03.816 10:30:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59641 00:07:03.816 10:30:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59641 /var/tmp/spdk2.sock 00:07:03.816 10:30:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59641 ']' 00:07:03.816 10:30:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:03.816 10:30:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.816 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:03.816 10:30:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:03.816 10:30:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.816 10:30:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.074 [2024-11-20 10:30:07.361467] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:07:04.074 [2024-11-20 10:30:07.361607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59641 ] 00:07:04.074 [2024-11-20 10:30:07.542333] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:04.074 [2024-11-20 10:30:07.546421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:04.664 [2024-11-20 10:30:07.819978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:04.664 [2024-11-20 10:30:07.820066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.664 [2024-11-20 10:30:07.820099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:06.569 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.569 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:06.569 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:06.569 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.569 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.569 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.569 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:06.569 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:06.569 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:06.569 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:06.569 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:06.569 10:30:10 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:06.569 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:06.569 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:06.569 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.569 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.827 [2024-11-20 10:30:10.048547] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59623 has claimed it. 00:07:06.827 request: 00:07:06.827 { 00:07:06.827 "method": "framework_enable_cpumask_locks", 00:07:06.827 "req_id": 1 00:07:06.827 } 00:07:06.827 Got JSON-RPC error response 00:07:06.827 response: 00:07:06.827 { 00:07:06.827 "code": -32603, 00:07:06.827 "message": "Failed to claim CPU core: 2" 00:07:06.827 } 00:07:06.827 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:06.827 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:06.827 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:06.827 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:06.827 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:06.827 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59623 /var/tmp/spdk.sock 00:07:06.827 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59623 ']' 00:07:06.827 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.827 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.827 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.827 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.827 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.086 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.086 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:07.086 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59641 /var/tmp/spdk2.sock 00:07:07.086 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59641 ']' 00:07:07.086 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:07.086 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:07.086 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:07:07.086 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.086 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.086 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.086 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:07.086 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:07.086 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:07.086 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:07.086 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:07.086 00:07:07.086 real 0m4.650s 00:07:07.086 user 0m1.457s 00:07:07.086 sys 0m0.228s 00:07:07.086 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.086 10:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.086 ************************************ 00:07:07.086 END TEST locking_overlapped_coremask_via_rpc 00:07:07.086 ************************************ 00:07:07.353 10:30:10 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:07.353 10:30:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59623 ]] 00:07:07.353 10:30:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59623 00:07:07.353 10:30:10 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59623 ']' 00:07:07.353 10:30:10 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59623 00:07:07.353 10:30:10 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:07.353 10:30:10 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:07.353 10:30:10 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59623 00:07:07.353 10:30:10 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:07.353 10:30:10 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:07.353 killing process with pid 59623 00:07:07.353 10:30:10 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59623' 00:07:07.353 10:30:10 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59623 00:07:07.353 10:30:10 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59623 00:07:10.637 10:30:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59641 ]] 00:07:10.637 10:30:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59641 00:07:10.637 10:30:13 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59641 ']' 00:07:10.637 10:30:13 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59641 00:07:10.637 10:30:13 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:10.637 10:30:13 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:10.637 10:30:13 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59641 00:07:10.637 10:30:13 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:10.637 10:30:13 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:10.637 killing process with pid 59641 00:07:10.637 10:30:13 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59641' 00:07:10.637 10:30:13 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59641 00:07:10.637 10:30:13 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59641 00:07:13.167 10:30:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:13.168 10:30:16 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:13.168 10:30:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59623 ]] 00:07:13.168 10:30:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59623 00:07:13.168 10:30:16 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59623 ']' 00:07:13.168 10:30:16 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59623 00:07:13.168 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59623) - No such process 00:07:13.168 Process with pid 59623 is not found 00:07:13.168 10:30:16 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59623 is not found' 00:07:13.168 10:30:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59641 ]] 00:07:13.168 10:30:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59641 00:07:13.168 10:30:16 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59641 ']' 00:07:13.168 10:30:16 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59641 00:07:13.168 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59641) - No such process 00:07:13.168 Process with pid 59641 is not found 00:07:13.168 10:30:16 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59641 is not found' 00:07:13.168 10:30:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:13.168 00:07:13.168 real 0m52.884s 00:07:13.168 user 1m31.959s 00:07:13.168 sys 0m6.502s 00:07:13.168 10:30:16 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.168 10:30:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.168 
************************************ 00:07:13.168 END TEST cpu_locks 00:07:13.168 ************************************ 00:07:13.168 00:07:13.168 real 1m26.179s 00:07:13.168 user 2m40.684s 00:07:13.168 sys 0m10.744s 00:07:13.168 10:30:16 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.168 10:30:16 event -- common/autotest_common.sh@10 -- # set +x 00:07:13.168 ************************************ 00:07:13.168 END TEST event 00:07:13.168 ************************************ 00:07:13.168 10:30:16 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:13.168 10:30:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:13.168 10:30:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.168 10:30:16 -- common/autotest_common.sh@10 -- # set +x 00:07:13.168 ************************************ 00:07:13.168 START TEST thread 00:07:13.168 ************************************ 00:07:13.168 10:30:16 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:13.168 * Looking for test storage... 
00:07:13.168 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:13.168 10:30:16 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:13.168 10:30:16 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:13.168 10:30:16 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:13.168 10:30:16 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:13.168 10:30:16 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.168 10:30:16 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.168 10:30:16 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.168 10:30:16 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.168 10:30:16 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.168 10:30:16 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.168 10:30:16 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.168 10:30:16 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.168 10:30:16 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.168 10:30:16 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.168 10:30:16 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.168 10:30:16 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:13.168 10:30:16 thread -- scripts/common.sh@345 -- # : 1 00:07:13.168 10:30:16 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.168 10:30:16 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:13.168 10:30:16 thread -- scripts/common.sh@365 -- # decimal 1 00:07:13.168 10:30:16 thread -- scripts/common.sh@353 -- # local d=1 00:07:13.168 10:30:16 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.168 10:30:16 thread -- scripts/common.sh@355 -- # echo 1 00:07:13.168 10:30:16 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.168 10:30:16 thread -- scripts/common.sh@366 -- # decimal 2 00:07:13.168 10:30:16 thread -- scripts/common.sh@353 -- # local d=2 00:07:13.168 10:30:16 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.168 10:30:16 thread -- scripts/common.sh@355 -- # echo 2 00:07:13.168 10:30:16 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.168 10:30:16 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.168 10:30:16 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.168 10:30:16 thread -- scripts/common.sh@368 -- # return 0 00:07:13.168 10:30:16 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.168 10:30:16 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:13.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.168 --rc genhtml_branch_coverage=1 00:07:13.168 --rc genhtml_function_coverage=1 00:07:13.168 --rc genhtml_legend=1 00:07:13.168 --rc geninfo_all_blocks=1 00:07:13.168 --rc geninfo_unexecuted_blocks=1 00:07:13.168 00:07:13.168 ' 00:07:13.168 10:30:16 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:13.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.168 --rc genhtml_branch_coverage=1 00:07:13.168 --rc genhtml_function_coverage=1 00:07:13.168 --rc genhtml_legend=1 00:07:13.168 --rc geninfo_all_blocks=1 00:07:13.168 --rc geninfo_unexecuted_blocks=1 00:07:13.168 00:07:13.168 ' 00:07:13.168 10:30:16 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:13.168 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.168 --rc genhtml_branch_coverage=1 00:07:13.168 --rc genhtml_function_coverage=1 00:07:13.168 --rc genhtml_legend=1 00:07:13.168 --rc geninfo_all_blocks=1 00:07:13.168 --rc geninfo_unexecuted_blocks=1 00:07:13.168 00:07:13.168 ' 00:07:13.168 10:30:16 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:13.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.168 --rc genhtml_branch_coverage=1 00:07:13.168 --rc genhtml_function_coverage=1 00:07:13.168 --rc genhtml_legend=1 00:07:13.168 --rc geninfo_all_blocks=1 00:07:13.168 --rc geninfo_unexecuted_blocks=1 00:07:13.168 00:07:13.168 ' 00:07:13.168 10:30:16 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:13.168 10:30:16 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:13.168 10:30:16 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.168 10:30:16 thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.168 ************************************ 00:07:13.168 START TEST thread_poller_perf 00:07:13.168 ************************************ 00:07:13.168 10:30:16 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:13.168 [2024-11-20 10:30:16.590062] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:07:13.168 [2024-11-20 10:30:16.590201] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59847 ] 00:07:13.427 [2024-11-20 10:30:16.767088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.427 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:13.427 [2024-11-20 10:30:16.891206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.804 [2024-11-20T10:30:18.283Z] ====================================== 00:07:14.804 [2024-11-20T10:30:18.283Z] busy:2299385144 (cyc) 00:07:14.804 [2024-11-20T10:30:18.283Z] total_run_count: 371000 00:07:14.804 [2024-11-20T10:30:18.283Z] tsc_hz: 2290000000 (cyc) 00:07:14.804 [2024-11-20T10:30:18.283Z] ====================================== 00:07:14.804 [2024-11-20T10:30:18.283Z] poller_cost: 6197 (cyc), 2706 (nsec) 00:07:14.804 00:07:14.804 real 0m1.594s 00:07:14.804 user 0m1.392s 00:07:14.804 sys 0m0.094s 00:07:14.804 10:30:18 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.804 10:30:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:14.804 ************************************ 00:07:14.804 END TEST thread_poller_perf 00:07:14.804 ************************************ 00:07:14.804 10:30:18 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:14.804 10:30:18 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:14.804 10:30:18 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.804 10:30:18 thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.804 ************************************ 00:07:14.804 START TEST thread_poller_perf 00:07:14.804 
************************************ 00:07:14.804 10:30:18 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:14.804 [2024-11-20 10:30:18.240041] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:07:14.804 [2024-11-20 10:30:18.240160] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59884 ] 00:07:15.062 [2024-11-20 10:30:18.416897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.321 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:15.321 [2024-11-20 10:30:18.551309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.696 [2024-11-20T10:30:20.175Z] ====================================== 00:07:16.696 [2024-11-20T10:30:20.175Z] busy:2293969076 (cyc) 00:07:16.696 [2024-11-20T10:30:20.175Z] total_run_count: 4515000 00:07:16.696 [2024-11-20T10:30:20.175Z] tsc_hz: 2290000000 (cyc) 00:07:16.696 [2024-11-20T10:30:20.175Z] ====================================== 00:07:16.696 [2024-11-20T10:30:20.175Z] poller_cost: 508 (cyc), 221 (nsec) 00:07:16.696 00:07:16.696 real 0m1.620s 00:07:16.696 user 0m1.418s 00:07:16.696 sys 0m0.093s 00:07:16.696 10:30:19 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.696 10:30:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:16.696 ************************************ 00:07:16.696 END TEST thread_poller_perf 00:07:16.696 ************************************ 00:07:16.696 10:30:19 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:16.696 00:07:16.696 real 0m3.542s 00:07:16.696 user 0m2.955s 00:07:16.696 sys 0m0.388s 00:07:16.696 10:30:19 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.696 10:30:19 thread -- common/autotest_common.sh@10 -- # set +x 00:07:16.696 ************************************ 00:07:16.696 END TEST thread 00:07:16.696 ************************************ 00:07:16.696 10:30:19 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:16.696 10:30:19 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:16.696 10:30:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:16.697 10:30:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.697 10:30:19 -- common/autotest_common.sh@10 -- # set +x 00:07:16.697 ************************************ 00:07:16.697 START TEST app_cmdline 00:07:16.697 ************************************ 00:07:16.697 10:30:19 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:16.697 * Looking for test storage... 00:07:16.697 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:16.697 10:30:20 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:16.697 10:30:20 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:16.697 10:30:20 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:16.697 10:30:20 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:16.697 10:30:20 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:16.697 10:30:20 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:16.697 10:30:20 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:16.697 10:30:20 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:16.697 10:30:20 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:16.697 10:30:20 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:16.697 10:30:20 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:16.697 10:30:20 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:07:16.697 10:30:20 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:16.697 10:30:20 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:16.697 10:30:20 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:16.697 10:30:20 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:16.697 10:30:20 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:16.697 10:30:20 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:16.697 10:30:20 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:16.697 10:30:20 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:16.697 10:30:20 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:16.697 10:30:20 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:16.697 10:30:20 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:16.697 10:30:20 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:16.697 10:30:20 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:16.697 10:30:20 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:16.697 10:30:20 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:16.697 10:30:20 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:16.697 10:30:20 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:16.697 10:30:20 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:16.697 10:30:20 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:16.697 10:30:20 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:16.697 10:30:20 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:16.697 10:30:20 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:16.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.697 --rc genhtml_branch_coverage=1 00:07:16.697 --rc genhtml_function_coverage=1 00:07:16.697 --rc 
genhtml_legend=1 00:07:16.697 --rc geninfo_all_blocks=1 00:07:16.697 --rc geninfo_unexecuted_blocks=1 00:07:16.697 00:07:16.697 ' 00:07:16.697 10:30:20 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:16.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.697 --rc genhtml_branch_coverage=1 00:07:16.697 --rc genhtml_function_coverage=1 00:07:16.697 --rc genhtml_legend=1 00:07:16.697 --rc geninfo_all_blocks=1 00:07:16.697 --rc geninfo_unexecuted_blocks=1 00:07:16.697 00:07:16.697 ' 00:07:16.697 10:30:20 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:16.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.697 --rc genhtml_branch_coverage=1 00:07:16.697 --rc genhtml_function_coverage=1 00:07:16.697 --rc genhtml_legend=1 00:07:16.697 --rc geninfo_all_blocks=1 00:07:16.697 --rc geninfo_unexecuted_blocks=1 00:07:16.697 00:07:16.697 ' 00:07:16.697 10:30:20 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:16.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.697 --rc genhtml_branch_coverage=1 00:07:16.697 --rc genhtml_function_coverage=1 00:07:16.697 --rc genhtml_legend=1 00:07:16.697 --rc geninfo_all_blocks=1 00:07:16.697 --rc geninfo_unexecuted_blocks=1 00:07:16.697 00:07:16.697 ' 00:07:16.697 10:30:20 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:16.697 10:30:20 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:16.697 10:30:20 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59967 00:07:16.697 10:30:20 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59967 00:07:16.697 10:30:20 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59967 ']' 00:07:16.697 10:30:20 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.697 10:30:20 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:07:16.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.697 10:30:20 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.697 10:30:20 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.697 10:30:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:16.956 [2024-11-20 10:30:20.239712] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:07:16.956 [2024-11-20 10:30:20.239842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59967 ] 00:07:16.956 [2024-11-20 10:30:20.425846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.215 [2024-11-20 10:30:20.573859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.151 10:30:21 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.151 10:30:21 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:18.151 10:30:21 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:18.409 { 00:07:18.409 "version": "SPDK v25.01-pre git sha1 097badaeb", 00:07:18.409 "fields": { 00:07:18.409 "major": 25, 00:07:18.409 "minor": 1, 00:07:18.409 "patch": 0, 00:07:18.409 "suffix": "-pre", 00:07:18.409 "commit": "097badaeb" 00:07:18.409 } 00:07:18.409 } 00:07:18.409 10:30:21 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:18.409 10:30:21 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:18.409 10:30:21 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:18.409 10:30:21 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:18.409 10:30:21 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:18.409 10:30:21 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.409 10:30:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:18.409 10:30:21 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:18.409 10:30:21 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:18.409 10:30:21 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.409 10:30:21 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:18.409 10:30:21 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:18.409 10:30:21 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:18.409 10:30:21 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:18.409 10:30:21 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:18.409 10:30:21 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:18.409 10:30:21 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.409 10:30:21 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:18.409 10:30:21 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.409 10:30:21 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:18.409 10:30:21 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.409 10:30:21 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:18.409 10:30:21 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:18.409 10:30:21 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:18.705 request: 00:07:18.705 { 00:07:18.705 "method": "env_dpdk_get_mem_stats", 00:07:18.705 "req_id": 1 00:07:18.705 } 00:07:18.705 Got JSON-RPC error response 00:07:18.705 response: 00:07:18.705 { 00:07:18.705 "code": -32601, 00:07:18.705 "message": "Method not found" 00:07:18.705 } 00:07:18.705 10:30:21 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:18.705 10:30:21 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:18.705 10:30:21 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:18.705 10:30:21 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:18.705 10:30:21 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59967 00:07:18.705 10:30:21 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59967 ']' 00:07:18.705 10:30:21 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59967 00:07:18.705 10:30:21 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:18.705 10:30:21 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:18.705 10:30:21 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59967 00:07:18.705 10:30:22 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:18.705 10:30:22 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:18.705 killing process with pid 59967 00:07:18.705 10:30:22 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59967' 00:07:18.705 10:30:22 app_cmdline -- common/autotest_common.sh@973 -- # kill 59967 00:07:18.705 10:30:22 app_cmdline -- common/autotest_common.sh@978 -- # wait 59967 00:07:21.257 00:07:21.257 real 0m4.799s 00:07:21.257 user 0m5.046s 00:07:21.257 sys 0m0.616s 00:07:21.257 10:30:24 app_cmdline -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.257 10:30:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:21.257 ************************************ 00:07:21.257 END TEST app_cmdline 00:07:21.257 ************************************ 00:07:21.515 10:30:24 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:21.515 10:30:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:21.515 10:30:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.515 10:30:24 -- common/autotest_common.sh@10 -- # set +x 00:07:21.515 ************************************ 00:07:21.515 START TEST version 00:07:21.515 ************************************ 00:07:21.515 10:30:24 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:21.515 * Looking for test storage... 00:07:21.515 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:21.515 10:30:24 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:21.515 10:30:24 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:21.515 10:30:24 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:21.774 10:30:24 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:21.774 10:30:24 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:21.774 10:30:25 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:21.774 10:30:25 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:21.775 10:30:25 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:21.775 10:30:25 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:21.775 10:30:25 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:21.775 10:30:25 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:21.775 10:30:25 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:21.775 10:30:25 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:21.775 10:30:25 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:07:21.775 10:30:25 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:21.775 10:30:25 version -- scripts/common.sh@344 -- # case "$op" in 00:07:21.775 10:30:25 version -- scripts/common.sh@345 -- # : 1 00:07:21.775 10:30:25 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:21.775 10:30:25 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:21.775 10:30:25 version -- scripts/common.sh@365 -- # decimal 1 00:07:21.775 10:30:25 version -- scripts/common.sh@353 -- # local d=1 00:07:21.775 10:30:25 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:21.775 10:30:25 version -- scripts/common.sh@355 -- # echo 1 00:07:21.775 10:30:25 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:21.775 10:30:25 version -- scripts/common.sh@366 -- # decimal 2 00:07:21.775 10:30:25 version -- scripts/common.sh@353 -- # local d=2 00:07:21.775 10:30:25 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:21.775 10:30:25 version -- scripts/common.sh@355 -- # echo 2 00:07:21.775 10:30:25 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:21.775 10:30:25 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:21.775 10:30:25 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:21.775 10:30:25 version -- scripts/common.sh@368 -- # return 0 00:07:21.775 10:30:25 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:21.775 10:30:25 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:21.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.775 --rc genhtml_branch_coverage=1 00:07:21.775 --rc genhtml_function_coverage=1 00:07:21.775 --rc genhtml_legend=1 00:07:21.775 --rc geninfo_all_blocks=1 00:07:21.775 --rc geninfo_unexecuted_blocks=1 00:07:21.775 00:07:21.775 ' 00:07:21.775 10:30:25 version -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:07:21.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.775 --rc genhtml_branch_coverage=1 00:07:21.775 --rc genhtml_function_coverage=1 00:07:21.775 --rc genhtml_legend=1 00:07:21.775 --rc geninfo_all_blocks=1 00:07:21.775 --rc geninfo_unexecuted_blocks=1 00:07:21.775 00:07:21.775 ' 00:07:21.775 10:30:25 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:21.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.775 --rc genhtml_branch_coverage=1 00:07:21.775 --rc genhtml_function_coverage=1 00:07:21.775 --rc genhtml_legend=1 00:07:21.775 --rc geninfo_all_blocks=1 00:07:21.775 --rc geninfo_unexecuted_blocks=1 00:07:21.775 00:07:21.775 ' 00:07:21.775 10:30:25 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:21.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.775 --rc genhtml_branch_coverage=1 00:07:21.775 --rc genhtml_function_coverage=1 00:07:21.775 --rc genhtml_legend=1 00:07:21.775 --rc geninfo_all_blocks=1 00:07:21.775 --rc geninfo_unexecuted_blocks=1 00:07:21.775 00:07:21.775 ' 00:07:21.775 10:30:25 version -- app/version.sh@17 -- # get_header_version major 00:07:21.775 10:30:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:21.775 10:30:25 version -- app/version.sh@14 -- # cut -f2 00:07:21.775 10:30:25 version -- app/version.sh@14 -- # tr -d '"' 00:07:21.775 10:30:25 version -- app/version.sh@17 -- # major=25 00:07:21.775 10:30:25 version -- app/version.sh@18 -- # get_header_version minor 00:07:21.775 10:30:25 version -- app/version.sh@14 -- # cut -f2 00:07:21.775 10:30:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:21.775 10:30:25 version -- app/version.sh@14 -- # tr -d '"' 00:07:21.775 10:30:25 version -- app/version.sh@18 -- # minor=1 00:07:21.775 10:30:25 
version -- app/version.sh@19 -- # get_header_version patch 00:07:21.775 10:30:25 version -- app/version.sh@14 -- # tr -d '"' 00:07:21.775 10:30:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:21.775 10:30:25 version -- app/version.sh@14 -- # cut -f2 00:07:21.775 10:30:25 version -- app/version.sh@19 -- # patch=0 00:07:21.775 10:30:25 version -- app/version.sh@20 -- # get_header_version suffix 00:07:21.775 10:30:25 version -- app/version.sh@14 -- # cut -f2 00:07:21.775 10:30:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:21.775 10:30:25 version -- app/version.sh@14 -- # tr -d '"' 00:07:21.775 10:30:25 version -- app/version.sh@20 -- # suffix=-pre 00:07:21.775 10:30:25 version -- app/version.sh@22 -- # version=25.1 00:07:21.775 10:30:25 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:21.775 10:30:25 version -- app/version.sh@28 -- # version=25.1rc0 00:07:21.775 10:30:25 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:21.775 10:30:25 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:21.775 10:30:25 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:21.775 10:30:25 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:21.775 ************************************ 00:07:21.775 END TEST version 00:07:21.775 ************************************ 00:07:21.775 00:07:21.775 real 0m0.334s 00:07:21.775 user 0m0.199s 00:07:21.775 sys 0m0.179s 00:07:21.775 10:30:25 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.775 10:30:25 version -- common/autotest_common.sh@10 -- # set +x 00:07:21.775 
10:30:25 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:21.775 10:30:25 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:07:21.775 10:30:25 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:21.775 10:30:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:21.775 10:30:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.775 10:30:25 -- common/autotest_common.sh@10 -- # set +x 00:07:21.775 ************************************ 00:07:21.775 START TEST bdev_raid 00:07:21.775 ************************************ 00:07:21.775 10:30:25 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:22.034 * Looking for test storage... 00:07:22.034 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:22.034 10:30:25 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:22.034 10:30:25 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:22.034 10:30:25 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:07:22.034 10:30:25 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:22.034 10:30:25 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:22.034 10:30:25 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:22.034 10:30:25 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:22.034 10:30:25 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.034 10:30:25 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:07:22.034 10:30:25 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:07:22.034 10:30:25 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:07:22.034 10:30:25 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:07:22.034 10:30:25 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:07:22.034 10:30:25 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:07:22.034 10:30:25 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:07:22.034 10:30:25 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:07:22.034 10:30:25 bdev_raid -- scripts/common.sh@345 -- # : 1 00:07:22.034 10:30:25 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:22.034 10:30:25 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:22.034 10:30:25 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:07:22.034 10:30:25 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:07:22.034 10:30:25 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.034 10:30:25 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:07:22.034 10:30:25 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:07:22.034 10:30:25 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:07:22.034 10:30:25 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:07:22.034 10:30:25 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.034 10:30:25 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:07:22.034 10:30:25 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:07:22.034 10:30:25 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:22.034 10:30:25 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:22.034 10:30:25 bdev_raid -- scripts/common.sh@368 -- # return 0 00:07:22.034 10:30:25 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.034 10:30:25 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:22.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.034 --rc genhtml_branch_coverage=1 00:07:22.034 --rc genhtml_function_coverage=1 00:07:22.034 --rc genhtml_legend=1 00:07:22.034 --rc geninfo_all_blocks=1 00:07:22.034 --rc geninfo_unexecuted_blocks=1 00:07:22.034 00:07:22.034 ' 00:07:22.034 10:30:25 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:22.034 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:22.034 --rc genhtml_branch_coverage=1 00:07:22.034 --rc genhtml_function_coverage=1 00:07:22.034 --rc genhtml_legend=1 00:07:22.034 --rc geninfo_all_blocks=1 00:07:22.034 --rc geninfo_unexecuted_blocks=1 00:07:22.034 00:07:22.034 ' 00:07:22.034 10:30:25 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:22.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.034 --rc genhtml_branch_coverage=1 00:07:22.034 --rc genhtml_function_coverage=1 00:07:22.034 --rc genhtml_legend=1 00:07:22.034 --rc geninfo_all_blocks=1 00:07:22.034 --rc geninfo_unexecuted_blocks=1 00:07:22.034 00:07:22.034 ' 00:07:22.034 10:30:25 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:22.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.034 --rc genhtml_branch_coverage=1 00:07:22.034 --rc genhtml_function_coverage=1 00:07:22.034 --rc genhtml_legend=1 00:07:22.034 --rc geninfo_all_blocks=1 00:07:22.034 --rc geninfo_unexecuted_blocks=1 00:07:22.034 00:07:22.034 ' 00:07:22.034 10:30:25 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:22.035 10:30:25 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:07:22.035 10:30:25 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:07:22.035 10:30:25 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:07:22.035 10:30:25 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:07:22.035 10:30:25 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:07:22.035 10:30:25 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:07:22.035 10:30:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:22.035 10:30:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.035 10:30:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:22.035 ************************************ 
00:07:22.035 START TEST raid1_resize_data_offset_test 00:07:22.035 ************************************ 00:07:22.035 10:30:25 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:07:22.035 10:30:25 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60166 00:07:22.035 10:30:25 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60166' 00:07:22.035 Process raid pid: 60166 00:07:22.035 10:30:25 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:22.035 10:30:25 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60166 00:07:22.035 10:30:25 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 60166 ']' 00:07:22.035 10:30:25 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.035 10:30:25 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.035 10:30:25 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.035 10:30:25 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.035 10:30:25 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.292 [2024-11-20 10:30:25.533408] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:07:22.292 [2024-11-20 10:30:25.533644] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:22.292 [2024-11-20 10:30:25.715051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.551 [2024-11-20 10:30:25.844058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.809 [2024-11-20 10:30:26.070244] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:22.809 [2024-11-20 10:30:26.070400] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:23.067 10:30:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.067 10:30:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:07:23.067 10:30:26 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:07:23.067 10:30:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.067 10:30:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.067 malloc0 00:07:23.067 10:30:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.067 10:30:26 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:07:23.067 10:30:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.067 10:30:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.325 malloc1 00:07:23.325 10:30:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.325 10:30:26 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:07:23.325 10:30:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.325 10:30:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.325 null0 00:07:23.325 10:30:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.325 10:30:26 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:07:23.325 10:30:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.325 10:30:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.325 [2024-11-20 10:30:26.586781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:07:23.325 [2024-11-20 10:30:26.588904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:23.325 [2024-11-20 10:30:26.588956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:07:23.325 [2024-11-20 10:30:26.589130] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:23.325 [2024-11-20 10:30:26.589146] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:07:23.325 [2024-11-20 10:30:26.589485] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:23.326 [2024-11-20 10:30:26.589685] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:23.326 [2024-11-20 10:30:26.589707] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:23.326 [2024-11-20 10:30:26.589948] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:07:23.326 10:30:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:23.326 10:30:26 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:23.326 10:30:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:23.326 10:30:26 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:07:23.326 10:30:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:23.326 10:30:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:23.326 10:30:26 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 ))
00:07:23.326 10:30:26 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0
00:07:23.326 10:30:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:23.326 10:30:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:23.326 [2024-11-20 10:30:26.646743] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0
00:07:23.326 10:30:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:23.326 10:30:26 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30
00:07:23.326 10:30:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:23.326 10:30:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:23.893 malloc2
00:07:23.893 10:30:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:23.893 10:30:27 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev Raid malloc2
00:07:23.893 10:30:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:23.893 10:30:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:23.893 [2024-11-20 10:30:27.212826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
[2024-11-20 10:30:27.232063] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:23.893 10:30:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:23.893 [2024-11-20 10:30:27.234175] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid
00:07:23.893 10:30:27 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:07:23.893 10:30:27 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:23.893 10:30:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:23.893 10:30:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:23.893 10:30:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:23.893 10:30:27 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 ))
00:07:23.893 10:30:27 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60166
00:07:23.893 10:30:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 60166 ']'
00:07:23.893 10:30:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 60166
00:07:23.893 10:30:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname
00:07:23.893 10:30:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:23.893 10:30:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60166
00:07:23.893 10:30:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:23.893 10:30:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:23.893 10:30:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60166'
killing process with pid 60166
00:07:23.893 10:30:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 60166
00:07:23.893 10:30:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 60166
00:07:23.893 [2024-11-20 10:30:27.312299] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
[2024-11-20 10:30:27.313482] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled
[2024-11-20 10:30:27.313549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
[2024-11-20 10:30:27.313568] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2
[2024-11-20 10:30:27.357813] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-11-20 10:30:27.358213] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-11-20 10:30:27.358233] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:07:26.425 [2024-11-20 10:30:29.405172] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:27.359 10:30:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0
00:07:27.359
00:07:27.359 real 0m5.144s
00:07:27.359 user 0m5.102s
00:07:27.359 sys 0m0.521s
00:07:27.359 10:30:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:27.359 ************************************
00:07:27.359 END TEST raid1_resize_data_offset_test
00:07:27.359 ************************************
00:07:27.359 10:30:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:27.359 10:30:30 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0
00:07:27.359 10:30:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:27.359 10:30:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:27.359 10:30:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:27.359 ************************************
00:07:27.359 START TEST raid0_resize_superblock_test
************************************
00:07:27.359 10:30:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0
00:07:27.359 10:30:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0
00:07:27.359 10:30:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:27.359 10:30:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60255
Process raid pid: 60255
00:07:27.359 10:30:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60255'
00:07:27.359 10:30:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60255
00:07:27.359 10:30:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60255 ']'
00:07:27.359 10:30:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:27.359 10:30:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:27.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:27.359 10:30:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:27.359 10:30:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:27.359 10:30:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:27.359 [2024-11-20 10:30:30.730857] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization...
[2024-11-20 10:30:30.730990] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:27.617 [2024-11-20 10:30:30.893879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-20 10:30:31.018466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:27.875 [2024-11-20 10:30:31.233806] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
[2024-11-20 10:30:31.233855] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:28.444 10:30:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:28.444 10:30:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:07:28.444 10:30:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:07:28.444 10:30:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:28.444 10:30:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.059 malloc0
00:07:29.059 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:29.059 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:07:29.059 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:29.059 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.059 [2024-11-20 10:30:32.226834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
[2024-11-20 10:30:32.226925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-11-20 10:30:32.226954] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
[2024-11-20 10:30:32.226967] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-20 10:30:32.229449] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-20 10:30:32.229496] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
pt0
00:07:29.059 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:29.059 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:07:29.059 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:29.059 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.059 582f5ab0-d7f8-4545-bc40-9a0510a1fce3
00:07:29.059 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:29.059 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:07:29.059 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:29.059 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.059 68089cea-77cc-4c98-99de-688d9db65905
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.060 58238458-9581-4828-8c9d-622b1926c6b5
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.060 [2024-11-20 10:30:32.360976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 68089cea-77cc-4c98-99de-688d9db65905 is claimed
[2024-11-20 10:30:32.361228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 58238458-9581-4828-8c9d-622b1926c6b5 is claimed
[2024-11-20 10:30:32.361501] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
[2024-11-20 10:30:32.361568] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512
[2024-11-20 10:30:32.361953] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
[2024-11-20 10:30:32.362237] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
[2024-11-20 10:30:32.362289] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
[2024-11-20 10:30:32.362586] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks'
[2024-11-20 10:30:32.465040] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 ))
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.060 [2024-11-20 10:30:32.512936] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
[2024-11-20 10:30:32.512971] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '68089cea-77cc-4c98-99de-688d9db65905' was resized: old size 131072, new size 204800
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.060 [2024-11-20 10:30:32.524804] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
[2024-11-20 10:30:32.524835] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '58238458-9581-4828-8c9d-622b1926c6b5' was resized: old size 131072, new size 204800
[2024-11-20 10:30:32.524880] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.060 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:07:29.320 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:29.320 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:07:29.320 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:07:29.320 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:29.320 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.320 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:07:29.320 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:29.320 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:07:29.320 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:07:29.320 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:29.320 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:29.320 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.320 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:07:29.320 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks'
00:07:29.320 [2024-11-20 10:30:32.620887] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:29.320 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:29.320 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:07:29.320 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:07:29.320 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 ))
00:07:29.320 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:07:29.320 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:29.320 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.320 [2024-11-20 10:30:32.668515] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
[2024-11-20 10:30:32.668627] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
[2024-11-20 10:30:32.668644] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
[2024-11-20 10:30:32.668664] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
[2024-11-20 10:30:32.668788] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-11-20 10:30:32.668826] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-11-20 10:30:32.668840] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:07:29.320 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:29.320 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:07:29.320 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:29.320 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.320 [2024-11-20 10:30:32.680389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
[2024-11-20 10:30:32.680462] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-11-20 10:30:32.680488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
[2024-11-20 10:30:32.680501] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-20 10:30:32.683014] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-20 10:30:32.683147] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:07:29.320 [2024-11-20 10:30:32.685214] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 68089cea-77cc-4c98-99de-688d9db65905
[2024-11-20 10:30:32.685295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 68089cea-77cc-4c98-99de-688d9db65905 is claimed
[2024-11-20 10:30:32.685447] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 58238458-9581-4828-8c9d-622b1926c6b5
[2024-11-20 10:30:32.685473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 58238458-9581-4828-8c9d-622b1926c6b5 is claimed
[2024-11-20 10:30:32.685685] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 58238458-9581-4828-8c9d-622b1926c6b5 (2) smaller than existing raid bdev Raid (3)
[2024-11-20 10:30:32.685720] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 68089cea-77cc-4c98-99de-688d9db65905: File exists
[2024-11-20 10:30:32.685764] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
[2024-11-20 10:30:32.685778] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512
pt0
[2024-11-20 10:30:32.686055] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
[2024-11-20 10:30:32.686241] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
[2024-11-20 10:30:32.686251] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00
[2024-11-20 10:30:32.686514] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:29.320 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:29.320 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:07:29.320 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:29.320 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.320 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:29.320 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:07:29.320 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks'
00:07:29.321 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:07:29.321 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:29.321 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:29.321 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.321 [2024-11-20 10:30:32.709575] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:29.321 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:29.321 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:07:29.321 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:07:29.321 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 ))
00:07:29.321 10:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60255
00:07:29.321 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60255 ']'
00:07:29.321 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60255
00:07:29.321 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname
00:07:29.321 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:29.321 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60255
killing process with pid 60255
00:07:29.321 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:29.321 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:29.321 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60255'
00:07:29.321 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60255
00:07:29.321 [2024-11-20 10:30:32.794675] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
[2024-11-20 10:30:32.794771] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-11-20 10:30:32.794823] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-11-20 10:30:32.794833] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline
00:07:29.321 10:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60255
00:07:31.224 [2024-11-20 10:30:34.438455] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:32.600 ************************************
00:07:32.600 END TEST raid0_resize_superblock_test
************************************
00:07:32.600 10:30:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
00:07:32.600
00:07:32.600 real 0m5.021s
00:07:32.600 user 0m5.275s
00:07:32.600 sys 0m0.527s
00:07:32.600 10:30:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:32.600 10:30:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:32.600 10:30:35 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1
00:07:32.600 10:30:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:32.600 10:30:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:32.600 10:30:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:32.600 ************************************
00:07:32.600 START TEST raid1_resize_superblock_test
************************************
00:07:32.600 10:30:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1
00:07:32.600 10:30:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1
00:07:32.600 10:30:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60359
Process raid pid: 60359
00:07:32.600 10:30:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60359'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:32.600 10:30:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60359
00:07:32.600 10:30:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60359 ']'
00:07:32.600 10:30:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:32.600 10:30:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:32.600 10:30:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:32.600 10:30:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:32.600 10:30:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:32.600 10:30:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:32.600 [2024-11-20 10:30:35.831927] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization...
[2024-11-20 10:30:35.832148] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:32.600 [2024-11-20 10:30:36.012690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:32.859 [2024-11-20 10:30:36.133541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:33.116 [2024-11-20 10:30:36.350587] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
[2024-11-20 10:30:36.350632] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:33.375 10:30:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:33.375 10:30:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:07:33.375 10:30:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:07:33.375 10:30:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:33.375 10:30:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:33.943 malloc0
00:07:33.943 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:33.943 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:07:33.943 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:33.943 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:33.943 [2024-11-20 10:30:37.241942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
[2024-11-20 10:30:37.242009] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-11-20 10:30:37.242029] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
[2024-11-20 10:30:37.242041] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-20 10:30:37.244209] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-20 10:30:37.244254] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
pt0
00:07:33.943 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:33.943 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:07:33.943 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:33.943 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:33.943 40afbe74-1034-4674-a009-1f07b78a205d
00:07:33.943 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:33.943 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:07:33.943 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:33.943 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:33.943 b07825ed-439e-46c2-9988-bc14c7452b2c
00:07:33.943 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:33.943 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:07:33.943 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:33.943 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:33.943 8e3c2366-82f3-4b62-a2ff-2679d4058e73
00:07:33.943 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:33.943 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:07:33.943 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:07:33.943 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:33.943 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:33.943 [2024-11-20 10:30:37.375263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b07825ed-439e-46c2-9988-bc14c7452b2c is claimed
[2024-11-20 10:30:37.375357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 8e3c2366-82f3-4b62-a2ff-2679d4058e73 is claimed
[2024-11-20 10:30:37.375531] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
[2024-11-20 10:30:37.375549] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512
[2024-11-20 10:30:37.375848] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
[2024-11-20 10:30:37.376077] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
[2024-11-20 10:30:37.376091] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
[2024-11-20 10:30:37.376269] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:33.943 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:33.943 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:07:33.943 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:07:33.943 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:33.943 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:33.943 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:34.202 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:07:34.202 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:07:34.202 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:07:34.202 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:34.202 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:34.202 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:34.202 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:07:34.202 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:34.202 10:30:37 bdev_raid.raid1_resize_superblock_test --
bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:07:34.202 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:34.202 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:34.202 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.202 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.202 [2024-11-20 10:30:37.487285] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:34.202 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.202 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:34.202 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.203 [2024-11-20 10:30:37.515227] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:34.203 [2024-11-20 10:30:37.515263] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'b07825ed-439e-46c2-9988-bc14c7452b2c' was resized: old size 131072, new size 204800 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:34.203 10:30:37 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.203 [2024-11-20 10:30:37.527113] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:34.203 [2024-11-20 10:30:37.527136] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '8e3c2366-82f3-4b62-a2ff-2679d4058e73' was resized: old size 131072, new size 204800 00:07:34.203 [2024-11-20 10:30:37.527164] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.203 [2024-11-20 10:30:37.607174] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.203 [2024-11-20 10:30:37.638839] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:34.203 [2024-11-20 10:30:37.638917] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:07:34.203 [2024-11-20 10:30:37.638947] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:34.203 [2024-11-20 10:30:37.639112] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:34.203 [2024-11-20 10:30:37.639320] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:34.203 [2024-11-20 10:30:37.639411] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:34.203 [2024-11-20 10:30:37.639432] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.203 [2024-11-20 10:30:37.650729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:34.203 [2024-11-20 10:30:37.650788] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:34.203 [2024-11-20 10:30:37.650812] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:34.203 [2024-11-20 10:30:37.650824] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:34.203 [2024-11-20 10:30:37.653304] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:34.203 [2024-11-20 10:30:37.653400] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:34.203 [2024-11-20 10:30:37.655200] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 
b07825ed-439e-46c2-9988-bc14c7452b2c 00:07:34.203 [2024-11-20 10:30:37.655280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b07825ed-439e-46c2-9988-bc14c7452b2c is claimed 00:07:34.203 [2024-11-20 10:30:37.655446] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 8e3c2366-82f3-4b62-a2ff-2679d4058e73 00:07:34.203 [2024-11-20 10:30:37.655469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 8e3c2366-82f3-4b62-a2ff-2679d4058e73 is claimed 00:07:34.203 [2024-11-20 10:30:37.655655] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 8e3c2366-82f3-4b62-a2ff-2679d4058e73 (2) smaller than existing raid bdev Raid (3) 00:07:34.203 [2024-11-20 10:30:37.655684] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev b07825ed-439e-46c2-9988-bc14c7452b2c: File exists 00:07:34.203 [2024-11-20 10:30:37.655725] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:34.203 [2024-11-20 10:30:37.655738] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:34.203 pt0 00:07:34.203 [2024-11-20 10:30:37.655994] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:34.203 [2024-11-20 10:30:37.656173] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:34.203 [2024-11-20 10:30:37.656189] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:07:34.203 [2024-11-20 10:30:37.656367] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.203 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.462 [2024-11-20 10:30:37.679534] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:34.462 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.462 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:34.462 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:34.462 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:07:34.462 10:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60359 00:07:34.462 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60359 ']' 00:07:34.462 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60359 00:07:34.462 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:34.462 10:30:37 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:34.462 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60359 00:07:34.462 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:34.462 killing process with pid 60359 00:07:34.462 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:34.462 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60359' 00:07:34.462 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60359 00:07:34.462 [2024-11-20 10:30:37.776999] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:34.462 [2024-11-20 10:30:37.777094] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:34.462 [2024-11-20 10:30:37.777154] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:34.462 10:30:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60359 00:07:34.462 [2024-11-20 10:30:37.777164] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:36.361 [2024-11-20 10:30:39.348916] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:37.296 10:30:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:37.296 00:07:37.296 real 0m4.817s 00:07:37.296 user 0m4.981s 00:07:37.296 sys 0m0.588s 00:07:37.296 10:30:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.296 10:30:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.296 ************************************ 00:07:37.296 END TEST raid1_resize_superblock_test 00:07:37.296 
************************************ 00:07:37.296 10:30:40 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:07:37.296 10:30:40 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:07:37.296 10:30:40 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:37.296 10:30:40 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:37.296 10:30:40 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:37.296 10:30:40 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:37.296 10:30:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:37.296 10:30:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.296 10:30:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:37.296 ************************************ 00:07:37.296 START TEST raid_function_test_raid0 00:07:37.296 ************************************ 00:07:37.296 10:30:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:07:37.296 10:30:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:37.296 10:30:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:37.297 10:30:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:37.297 10:30:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60456 00:07:37.297 Process raid pid: 60456 00:07:37.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:37.297 10:30:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60456' 00:07:37.297 10:30:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:37.297 10:30:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60456 00:07:37.297 10:30:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60456 ']' 00:07:37.297 10:30:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.297 10:30:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.297 10:30:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.297 10:30:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.297 10:30:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:37.297 [2024-11-20 10:30:40.730705] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:07:37.297 [2024-11-20 10:30:40.730830] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.554 [2024-11-20 10:30:40.907419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.812 [2024-11-20 10:30:41.034053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.812 [2024-11-20 10:30:41.266883] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:37.812 [2024-11-20 10:30:41.266926] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:38.381 10:30:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.381 10:30:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:07:38.381 10:30:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:38.381 10:30:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.381 10:30:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:38.381 Base_1 00:07:38.381 10:30:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.381 10:30:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:38.381 10:30:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.381 10:30:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:38.381 Base_2 00:07:38.381 10:30:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.381 10:30:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:38.381 10:30:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.382 10:30:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:38.382 [2024-11-20 10:30:41.678678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:38.382 [2024-11-20 10:30:41.680935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:38.382 [2024-11-20 10:30:41.681080] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:38.382 [2024-11-20 10:30:41.681130] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:38.382 [2024-11-20 10:30:41.681503] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:38.382 [2024-11-20 10:30:41.681724] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:38.382 [2024-11-20 10:30:41.681770] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:38.382 [2024-11-20 10:30:41.682023] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:38.382 10:30:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.382 10:30:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:38.382 10:30:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:38.382 10:30:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.382 10:30:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:38.382 10:30:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.382 10:30:41 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:38.382 10:30:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:38.382 10:30:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:38.382 10:30:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:38.382 10:30:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:38.382 10:30:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:38.382 10:30:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:38.382 10:30:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:38.382 10:30:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:38.382 10:30:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:38.382 10:30:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:38.382 10:30:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:38.641 [2024-11-20 10:30:41.966255] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:38.641 /dev/nbd0 00:07:38.641 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:38.641 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:38.641 10:30:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:38.641 10:30:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:07:38.641 10:30:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:38.641 
10:30:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:38.641 10:30:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:38.642 10:30:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:07:38.642 10:30:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:38.642 10:30:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:38.642 10:30:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:38.642 1+0 records in 00:07:38.642 1+0 records out 00:07:38.642 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308506 s, 13.3 MB/s 00:07:38.642 10:30:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:38.642 10:30:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:07:38.642 10:30:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:38.642 10:30:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:38.642 10:30:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:07:38.642 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:38.642 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:38.642 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:38.642 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:38.642 10:30:42 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:07:38.901 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:38.901 {
00:07:38.901 "nbd_device": "/dev/nbd0",
00:07:38.901 "bdev_name": "raid"
00:07:38.901 }
00:07:38.901 ]'
00:07:38.901 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:38.901 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[
00:07:38.901 {
00:07:38.901 "nbd_device": "/dev/nbd0",
00:07:38.901 "bdev_name": "raid"
00:07:38.901 }
00:07:38.901 ]'
00:07:38.901 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:07:38.901 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:07:38.901 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:38.901 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1
00:07:38.901 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1
00:07:38.901 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1
00:07:38.901 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']'
00:07:38.901 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0
00:07:38.901 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard
00:07:38.901 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0
00:07:38.901 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize
00:07:38.901 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5
00:07:38.901 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0
00:07:38.901 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC
00:07:38.901 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512
00:07:38.901 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096
00:07:38.901 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152
00:07:38.901 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321')
00:07:38.901 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs
00:07:38.901 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456')
00:07:38.901 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums
00:07:38.901 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off
00:07:38.901 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len
00:07:38.901 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
00:07:39.160 4096+0 records in
00:07:39.160 4096+0 records out
00:07:39.160 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0320277 s, 65.5 MB/s
00:07:39.160 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
00:07:39.419 4096+0 records in
00:07:39.419 4096+0 records out
00:07:39.419 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.233688 s, 9.0 MB/s
00:07:39.419 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0
00:07:39.419 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:39.419 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 ))
00:07:39.419 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:39.419 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0
00:07:39.419 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536
00:07:39.419 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc
00:07:39.419 128+0 records in
00:07:39.419 128+0 records out
00:07:39.419 65536 bytes (66 kB, 64 KiB) copied, 0.000646753 s, 101 MB/s
00:07:39.419 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0
00:07:39.419 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:07:39.419 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:39.419 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:07:39.419 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:39.419 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336
00:07:39.419 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920
00:07:39.419 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
00:07:39.419 2035+0 records in
00:07:39.419 2035+0 records out
00:07:39.419 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0108371 s, 96.1 MB/s
00:07:39.419 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0
00:07:39.419 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:07:39.419 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:39.419 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:07:39.419 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:39.419 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352
00:07:39.419 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472
00:07:39.419 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc
00:07:39.419 456+0 records in
00:07:39.419 456+0 records out
00:07:39.419 233472 bytes (233 kB, 228 KiB) copied, 0.0040977 s, 57.0 MB/s
00:07:39.419 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0
00:07:39.419 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:07:39.419 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:39.419 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:07:39.419 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:39.419 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0
00:07:39.419 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:07:39.419 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:07:39.419 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:07:39.420 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:39.420 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i
00:07:39.420 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:39.420 10:30:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:07:39.679 [2024-11-20 10:30:43.008544] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:39.679 10:30:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:39.679 10:30:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:39.679 10:30:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:39.679 10:30:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:39.679 10:30:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:39.679 10:30:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:39.679 10:30:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break
00:07:39.679 10:30:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0
00:07:39.679 10:30:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock
00:07:39.679 10:30:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:07:39.679 10:30:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:07:39.939 10:30:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:39.939 10:30:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:39.939 10:30:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:39.939 10:30:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:39.939 10:30:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:39.939 10:30:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo ''
00:07:39.939 10:30:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true
00:07:39.939 10:30:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0
00:07:39.939 10:30:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0
00:07:39.939 10:30:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0
00:07:39.939 10:30:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']'
00:07:39.939 10:30:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60456
00:07:39.939 10:30:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60456 ']'
00:07:39.939 10:30:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60456
00:07:39.939 10:30:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname
00:07:39.939 10:30:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:39.939 10:30:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60456
00:07:39.939 killing process with pid 60456
00:07:39.939 10:30:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:39.939 10:30:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:39.939 10:30:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60456'
00:07:39.939 10:30:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60456
00:07:39.939 [2024-11-20 10:30:43.350114] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:39.939 [2024-11-20 10:30:43.350226] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:39.939 10:30:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60456
00:07:39.939 [2024-11-20 10:30:43.350277] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:39.939 [2024-11-20 10:30:43.350293] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline
00:07:40.197 [2024-11-20 10:30:43.576806] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:41.580 10:30:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0
00:07:41.580
00:07:41.580 real 0m4.076s
00:07:41.580 user 0m4.824s
00:07:41.580 sys 0m0.942s
00:07:41.580 10:30:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:41.580 10:30:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:07:41.580 ************************************
00:07:41.580 END TEST raid_function_test_raid0
00:07:41.580 ************************************
00:07:41.580 10:30:44 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat
00:07:41.580 10:30:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:41.580 10:30:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:41.580 10:30:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:41.580 ************************************
00:07:41.580 START TEST raid_function_test_concat
00:07:41.580 ************************************
00:07:41.580 10:30:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat
00:07:41.580 10:30:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat
00:07:41.580 10:30:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0
00:07:41.580 10:30:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev
00:07:41.580 Process raid pid: 60585
00:07:41.580 10:30:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60585
00:07:41.580 10:30:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60585'
00:07:41.580 10:30:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:41.580 10:30:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60585
00:07:41.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:41.581 10:30:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60585 ']'
00:07:41.581 10:30:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:41.581 10:30:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:41.581 10:30:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:41.581 10:30:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:41.581 10:30:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:07:41.581 [2024-11-20 10:30:44.862214] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization...
00:07:41.581 [2024-11-20 10:30:44.862326] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:41.581 [2024-11-20 10:30:45.021048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:41.837 [2024-11-20 10:30:45.146665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:42.138 [2024-11-20 10:30:45.366326] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:42.139 [2024-11-20 10:30:45.366381] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:42.396 10:30:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:42.396 10:30:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0
00:07:42.396 10:30:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1
00:07:42.396 10:30:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:42.396 10:30:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:07:42.396 Base_1
00:07:42.396 10:30:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:42.396 10:30:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2
00:07:42.396 10:30:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:42.396 10:30:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:07:42.396 Base_2
00:07:42.396 10:30:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:42.396 10:30:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid
00:07:42.396 10:30:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:42.396 10:30:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:07:42.396 [2024-11-20 10:30:45.783479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:07:42.396 [2024-11-20 10:30:45.785522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:07:42.396 [2024-11-20 10:30:45.785626] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:07:42.396 [2024-11-20 10:30:45.785642] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:07:42.396 [2024-11-20 10:30:45.785934] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:42.396 [2024-11-20 10:30:45.786091] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:07:42.396 [2024-11-20 10:30:45.786100] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780
00:07:42.396 [2024-11-20 10:30:45.786293] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:42.396 10:30:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:42.396 10:30:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)'
00:07:42.396 10:30:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online
00:07:42.396 10:30:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:42.396 10:30:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:07:42.396 10:30:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:42.396 10:30:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid
00:07:42.396 10:30:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']'
00:07:42.396 10:30:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0
00:07:42.396 10:30:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:07:42.396 10:30:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid')
00:07:42.396 10:30:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:42.396 10:30:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:07:42.396 10:30:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:42.396 10:30:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i
00:07:42.396 10:30:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:42.396 10:30:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:07:42.396 10:30:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0
00:07:42.653 [2024-11-20 10:30:46.051061] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:07:42.653 /dev/nbd0
00:07:42.653 10:30:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:42.653 10:30:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:42.653 10:30:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:07:42.653 10:30:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i
00:07:42.653 10:30:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:42.653 10:30:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:42.653 10:30:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:07:42.653 10:30:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break
00:07:42.653 10:30:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:42.653 10:30:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:42.653 10:30:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:42.653 1+0 records in
00:07:42.653 1+0 records out
00:07:42.653 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391961 s, 10.5 MB/s
00:07:42.653 10:30:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:42.653 10:30:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096
00:07:42.653 10:30:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:42.653 10:30:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:42.653 10:30:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0
00:07:42.653 10:30:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:42.653 10:30:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:07:42.653 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock
00:07:42.653 10:30:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:07:42.653 10:30:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:07:42.910 10:30:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:42.910 {
00:07:42.910 "nbd_device": "/dev/nbd0",
00:07:42.910 "bdev_name": "raid"
00:07:42.910 }
00:07:42.910 ]'
00:07:42.910 10:30:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[
00:07:42.910 {
00:07:42.910 "nbd_device": "/dev/nbd0",
00:07:42.910 "bdev_name": "raid"
00:07:42.910 }
00:07:42.910 ]'
00:07:42.910 10:30:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:43.167 10:30:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:07:43.167 10:30:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:43.167 10:30:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:07:43.167 10:30:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1
00:07:43.167 10:30:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1
00:07:43.167 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1
00:07:43.167 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']'
00:07:43.167 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0
00:07:43.167 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard
00:07:43.167 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0
00:07:43.167 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize
00:07:43.167 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0
00:07:43.167 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC
00:07:43.167 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5
00:07:43.167 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512
00:07:43.167 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096
00:07:43.167 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152
00:07:43.167 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321')
00:07:43.167 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs
00:07:43.167 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456')
00:07:43.167 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums
00:07:43.167 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off
00:07:43.167 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len
00:07:43.167 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
00:07:43.167 4096+0 records in
00:07:43.167 4096+0 records out
00:07:43.167 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0340215 s, 61.6 MB/s
00:07:43.167 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
00:07:43.425 4096+0 records in
00:07:43.425 4096+0 records out
00:07:43.425 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.228454 s, 9.2 MB/s
00:07:43.425 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0
00:07:43.425 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:43.425 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 ))
00:07:43.425 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:43.425 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0
00:07:43.425 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536
00:07:43.425 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc
00:07:43.425 128+0 records in
00:07:43.425 128+0 records out
00:07:43.425 65536 bytes (66 kB, 64 KiB) copied, 0.00121691 s, 53.9 MB/s
00:07:43.425 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0
00:07:43.425 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:07:43.425 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:43.425 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:07:43.425 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:43.425 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336
00:07:43.425 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920
00:07:43.425 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
00:07:43.425 2035+0 records in
00:07:43.425 2035+0 records out
00:07:43.425 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00707229 s, 147 MB/s
00:07:43.425 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0
00:07:43.425 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:07:43.425 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:43.425 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:07:43.425 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:43.425 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352
00:07:43.425 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472
00:07:43.425 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc
00:07:43.425 456+0 records in
00:07:43.425 456+0 records out
00:07:43.425 233472 bytes (233 kB, 228 KiB) copied, 0.00358155 s, 65.2 MB/s
00:07:43.425 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0
00:07:43.425 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:07:43.425 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:43.425 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:07:43.425 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:43.425 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0
00:07:43.425 10:30:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:07:43.425 10:30:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:07:43.425 10:30:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:07:43.425 10:30:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:43.425 10:30:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i
00:07:43.425 10:30:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:43.425 10:30:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:07:43.682 10:30:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:43.682 [2024-11-20 10:30:47.037927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:43.683 10:30:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:43.683 10:30:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:43.683 10:30:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:43.683 10:30:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:43.683 10:30:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:43.683 10:30:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break
00:07:43.683 10:30:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0
00:07:43.683 10:30:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock
00:07:43.683 10:30:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:07:43.683 10:30:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:07:43.940 10:30:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:43.940 10:30:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:43.940 10:30:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:43.940 10:30:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:43.940 10:30:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo ''
00:07:43.940 10:30:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:43.940 10:30:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true
00:07:43.940 10:30:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0
00:07:43.940 10:30:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0
00:07:43.940 10:30:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0
00:07:43.940 10:30:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']'
00:07:43.940 10:30:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60585
00:07:43.940 10:30:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60585 ']'
00:07:43.940 10:30:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60585
00:07:43.940 10:30:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname
00:07:43.940 10:30:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:43.940 10:30:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60585
00:07:43.940 killing process with pid 60585
00:07:43.940 10:30:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:43.940 10:30:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:43.940 10:30:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60585'
00:07:43.940 10:30:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60585
00:07:43.941 [2024-11-20 10:30:47.404113] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:43.941 [2024-11-20 10:30:47.404228] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:43.941 10:30:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60585
00:07:43.941 [2024-11-20 10:30:47.404288] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:43.941 [2024-11-20 10:30:47.404301] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline
00:07:44.198 [2024-11-20 10:30:47.621059] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:45.571 10:30:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0
00:07:45.571
00:07:45.571 real 0m4.043s
00:07:45.571 user 0m4.746s
00:07:45.571 sys 0m0.958s
00:07:45.571 10:30:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:45.571 10:30:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:07:45.571 ************************************
00:07:45.571 END TEST raid_function_test_concat
00:07:45.571 ************************************
00:07:45.571 10:30:48 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0
00:07:45.571 10:30:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:45.571 10:30:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:45.571 10:30:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:45.571 ************************************
00:07:45.571 START TEST raid0_resize_test
00:07:45.571 ************************************
00:07:45.571 10:30:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0
00:07:45.571 10:30:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0
00:07:45.571 10:30:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512
00:07:45.571 10:30:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32
00:07:45.571 10:30:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64
00:07:45.571 10:30:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt
00:07:45.571 10:30:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb
00:07:45.571 10:30:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb
00:07:45.571 10:30:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size
00:07:45.571 10:30:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60714
00:07:45.571 10:30:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:45.571 10:30:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60714'
00:07:45.571 Process raid pid: 60714
00:07:45.571 10:30:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60714
00:07:45.571 10:30:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60714 ']'
00:07:45.571 10:30:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:45.571 10:30:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:45.571 10:30:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:45.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.571 10:30:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.571 10:30:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.571 [2024-11-20 10:30:48.974861] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:07:45.571 [2024-11-20 10:30:48.975081] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.828 [2024-11-20 10:30:49.155324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.828 [2024-11-20 10:30:49.282308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.085 [2024-11-20 10:30:49.508586] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.085 [2024-11-20 10:30:49.508620] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.652 10:30:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:46.652 10:30:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:46.652 10:30:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:46.652 10:30:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.652 10:30:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.652 Base_1 00:07:46.652 10:30:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.652 10:30:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:46.652 10:30:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:46.652 10:30:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.652 Base_2 00:07:46.652 10:30:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.652 10:30:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:46.652 10:30:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:46.652 10:30:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.652 10:30:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.652 [2024-11-20 10:30:49.876308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:46.652 [2024-11-20 10:30:49.878307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:46.652 [2024-11-20 10:30:49.878373] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:46.652 [2024-11-20 10:30:49.878386] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:46.652 [2024-11-20 10:30:49.878622] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:46.652 [2024-11-20 10:30:49.878735] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:46.652 [2024-11-20 10:30:49.878744] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:46.652 [2024-11-20 10:30:49.878900] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:46.652 10:30:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.652 10:30:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:46.652 10:30:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:46.652 10:30:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.652 [2024-11-20 10:30:49.888269] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:46.652 [2024-11-20 10:30:49.888344] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:46.652 true 00:07:46.652 10:30:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.652 10:30:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:46.652 10:30:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:46.652 10:30:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.652 10:30:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.652 [2024-11-20 10:30:49.904500] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:46.652 10:30:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.653 10:30:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:46.653 10:30:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:46.653 10:30:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:46.653 10:30:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:46.653 10:30:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:46.653 10:30:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:46.653 10:30:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.653 10:30:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.653 [2024-11-20 10:30:49.948186] 
bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:46.653 [2024-11-20 10:30:49.948265] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:46.653 [2024-11-20 10:30:49.948321] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:46.653 true 00:07:46.653 10:30:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.653 10:30:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:46.653 10:30:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:46.653 10:30:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.653 10:30:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.653 [2024-11-20 10:30:49.964386] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:46.653 10:30:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.653 10:30:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:46.653 10:30:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:46.653 10:30:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:46.653 10:30:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:46.653 10:30:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:46.653 10:30:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60714 00:07:46.653 10:30:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60714 ']' 00:07:46.653 10:30:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60714 00:07:46.653 10:30:49 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@959 -- # uname 00:07:46.653 10:30:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:46.653 10:30:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60714 00:07:46.653 10:30:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:46.653 10:30:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:46.653 10:30:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60714' 00:07:46.653 killing process with pid 60714 00:07:46.653 10:30:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60714 00:07:46.653 [2024-11-20 10:30:50.030713] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:46.653 [2024-11-20 10:30:50.030875] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:46.653 10:30:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60714 00:07:46.653 [2024-11-20 10:30:50.030975] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:46.653 [2024-11-20 10:30:50.031031] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:46.653 [2024-11-20 10:30:50.049864] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:48.033 10:30:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:48.033 00:07:48.033 real 0m2.388s 00:07:48.033 user 0m2.557s 00:07:48.033 sys 0m0.340s 00:07:48.033 10:30:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.033 10:30:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.033 ************************************ 00:07:48.033 END TEST raid0_resize_test 00:07:48.033 
************************************ 00:07:48.033 10:30:51 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:48.033 10:30:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:48.033 10:30:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.033 10:30:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:48.033 ************************************ 00:07:48.033 START TEST raid1_resize_test 00:07:48.033 ************************************ 00:07:48.033 10:30:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:07:48.033 10:30:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:48.033 10:30:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:48.033 10:30:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:48.033 10:30:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:48.033 10:30:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:48.033 10:30:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:48.033 10:30:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:48.033 10:30:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:48.033 10:30:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60776 00:07:48.033 10:30:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:48.033 10:30:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60776' 00:07:48.033 Process raid pid: 60776 00:07:48.033 10:30:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60776 00:07:48.033 10:30:51 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@835 -- # '[' -z 60776 ']' 00:07:48.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.033 10:30:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.033 10:30:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:48.033 10:30:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.033 10:30:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:48.033 10:30:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.033 [2024-11-20 10:30:51.432925] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:07:48.033 [2024-11-20 10:30:51.433063] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.292 [2024-11-20 10:30:51.617504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.293 [2024-11-20 10:30:51.744966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.551 [2024-11-20 10:30:51.967887] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.551 [2024-11-20 10:30:51.968034] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.118 Base_1 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.118 Base_2 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.118 [2024-11-20 10:30:52.352656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:49.118 [2024-11-20 10:30:52.354746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:49.118 [2024-11-20 10:30:52.354818] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:49.118 [2024-11-20 10:30:52.354830] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:49.118 [2024-11-20 10:30:52.355132] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:49.118 [2024-11-20 10:30:52.355285] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:49.118 [2024-11-20 10:30:52.355296] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name Raid, raid_bdev 0x617000007780 00:07:49.118 [2024-11-20 10:30:52.355638] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.118 [2024-11-20 10:30:52.364590] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:49.118 [2024-11-20 10:30:52.364668] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:49.118 true 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.118 [2024-11-20 10:30:52.380777] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 
00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.118 [2024-11-20 10:30:52.428525] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:49.118 [2024-11-20 10:30:52.428626] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:49.118 [2024-11-20 10:30:52.428710] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:49.118 true 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.118 [2024-11-20 10:30:52.444691] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:49.118 
10:30:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60776 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60776 ']' 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60776 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60776 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60776' 00:07:49.118 killing process with pid 60776 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60776 00:07:49.118 [2024-11-20 10:30:52.529119] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:49.118 [2024-11-20 10:30:52.529302] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:49.118 10:30:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60776 00:07:49.118 [2024-11-20 10:30:52.529875] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:49.118 [2024-11-20 10:30:52.529955] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:49.118 [2024-11-20 10:30:52.548116] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:50.494 10:30:53 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@389 -- # return 0 00:07:50.494 00:07:50.494 real 0m2.417s 00:07:50.494 user 0m2.627s 00:07:50.494 sys 0m0.350s 00:07:50.494 10:30:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.494 10:30:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.494 ************************************ 00:07:50.494 END TEST raid1_resize_test 00:07:50.494 ************************************ 00:07:50.494 10:30:53 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:50.494 10:30:53 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:50.494 10:30:53 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:50.494 10:30:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:50.494 10:30:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.494 10:30:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:50.494 ************************************ 00:07:50.494 START TEST raid_state_function_test 00:07:50.494 ************************************ 00:07:50.494 10:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:07:50.494 10:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:50.494 10:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:50.494 10:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:50.494 10:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:50.494 10:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:50.494 10:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:50.494 10:30:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:50.494 10:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:50.494 10:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:50.494 10:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:50.494 10:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:50.494 10:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:50.494 10:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:50.494 10:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:50.494 10:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:50.494 10:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:50.494 10:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:50.495 10:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:50.495 10:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:50.495 10:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:50.495 10:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:50.495 10:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:50.495 10:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:50.495 Process raid pid: 60838 00:07:50.495 10:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60838 00:07:50.495 10:30:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:50.495 10:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60838' 00:07:50.495 10:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60838 00:07:50.495 10:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60838 ']' 00:07:50.495 10:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.495 10:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.495 10:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.495 10:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.495 10:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.495 [2024-11-20 10:30:53.936574] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:07:50.495 [2024-11-20 10:30:53.936835] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:50.753 [2024-11-20 10:30:54.115399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.013 [2024-11-20 10:30:54.241754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.013 [2024-11-20 10:30:54.481340] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:51.013 [2024-11-20 10:30:54.481487] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:51.579 10:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.579 10:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:51.579 10:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:51.579 10:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.579 10:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.579 [2024-11-20 10:30:54.825381] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:51.579 [2024-11-20 10:30:54.825437] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:51.579 [2024-11-20 10:30:54.825449] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:51.579 [2024-11-20 10:30:54.825460] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:51.579 10:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.579 10:30:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:51.579 10:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.579 10:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:51.579 10:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:51.579 10:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.579 10:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:51.579 10:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.579 10:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.579 10:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.579 10:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.579 10:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.579 10:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.579 10:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.579 10:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.579 10:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.579 10:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.579 "name": "Existed_Raid", 00:07:51.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.579 "strip_size_kb": 64, 00:07:51.579 "state": "configuring", 00:07:51.579 
"raid_level": "raid0", 00:07:51.579 "superblock": false, 00:07:51.579 "num_base_bdevs": 2, 00:07:51.579 "num_base_bdevs_discovered": 0, 00:07:51.579 "num_base_bdevs_operational": 2, 00:07:51.579 "base_bdevs_list": [ 00:07:51.579 { 00:07:51.579 "name": "BaseBdev1", 00:07:51.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.579 "is_configured": false, 00:07:51.579 "data_offset": 0, 00:07:51.579 "data_size": 0 00:07:51.579 }, 00:07:51.579 { 00:07:51.579 "name": "BaseBdev2", 00:07:51.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.579 "is_configured": false, 00:07:51.579 "data_offset": 0, 00:07:51.579 "data_size": 0 00:07:51.579 } 00:07:51.579 ] 00:07:51.579 }' 00:07:51.579 10:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.579 10:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.837 10:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:51.837 10:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.837 10:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.837 [2024-11-20 10:30:55.304535] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:51.837 [2024-11-20 10:30:55.304630] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:51.837 10:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.837 10:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:51.837 10:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.837 10:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:52.096 [2024-11-20 10:30:55.316503] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:52.096 [2024-11-20 10:30:55.316598] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:52.096 [2024-11-20 10:30:55.316632] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:52.096 [2024-11-20 10:30:55.316664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:52.096 10:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.096 10:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:52.096 10:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.096 10:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.096 [2024-11-20 10:30:55.371967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:52.096 BaseBdev1 00:07:52.096 10:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.096 10:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:52.096 10:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:52.096 10:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:52.096 10:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:52.096 10:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:52.096 10:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:52.096 10:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:52.096 10:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.096 10:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.096 10:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.096 10:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:52.096 10:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.096 10:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.096 [ 00:07:52.096 { 00:07:52.096 "name": "BaseBdev1", 00:07:52.096 "aliases": [ 00:07:52.096 "946f1c93-f53f-4b14-85a9-2e959a2d41d3" 00:07:52.096 ], 00:07:52.096 "product_name": "Malloc disk", 00:07:52.096 "block_size": 512, 00:07:52.096 "num_blocks": 65536, 00:07:52.096 "uuid": "946f1c93-f53f-4b14-85a9-2e959a2d41d3", 00:07:52.096 "assigned_rate_limits": { 00:07:52.096 "rw_ios_per_sec": 0, 00:07:52.096 "rw_mbytes_per_sec": 0, 00:07:52.096 "r_mbytes_per_sec": 0, 00:07:52.096 "w_mbytes_per_sec": 0 00:07:52.096 }, 00:07:52.096 "claimed": true, 00:07:52.097 "claim_type": "exclusive_write", 00:07:52.097 "zoned": false, 00:07:52.097 "supported_io_types": { 00:07:52.097 "read": true, 00:07:52.097 "write": true, 00:07:52.097 "unmap": true, 00:07:52.097 "flush": true, 00:07:52.097 "reset": true, 00:07:52.097 "nvme_admin": false, 00:07:52.097 "nvme_io": false, 00:07:52.097 "nvme_io_md": false, 00:07:52.097 "write_zeroes": true, 00:07:52.097 "zcopy": true, 00:07:52.097 "get_zone_info": false, 00:07:52.097 "zone_management": false, 00:07:52.097 "zone_append": false, 00:07:52.097 "compare": false, 00:07:52.097 "compare_and_write": false, 00:07:52.097 "abort": true, 00:07:52.097 "seek_hole": false, 00:07:52.097 "seek_data": false, 00:07:52.097 "copy": true, 00:07:52.097 "nvme_iov_md": 
false 00:07:52.097 }, 00:07:52.097 "memory_domains": [ 00:07:52.097 { 00:07:52.097 "dma_device_id": "system", 00:07:52.097 "dma_device_type": 1 00:07:52.097 }, 00:07:52.097 { 00:07:52.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.097 "dma_device_type": 2 00:07:52.097 } 00:07:52.097 ], 00:07:52.097 "driver_specific": {} 00:07:52.097 } 00:07:52.097 ] 00:07:52.097 10:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.097 10:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:52.097 10:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:52.097 10:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.097 10:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:52.097 10:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:52.097 10:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.097 10:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:52.097 10:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.097 10:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.097 10:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.097 10:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.097 10:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.097 10:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.097 
10:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.097 10:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.097 10:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.097 10:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.097 "name": "Existed_Raid", 00:07:52.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.097 "strip_size_kb": 64, 00:07:52.097 "state": "configuring", 00:07:52.097 "raid_level": "raid0", 00:07:52.097 "superblock": false, 00:07:52.097 "num_base_bdevs": 2, 00:07:52.097 "num_base_bdevs_discovered": 1, 00:07:52.097 "num_base_bdevs_operational": 2, 00:07:52.097 "base_bdevs_list": [ 00:07:52.097 { 00:07:52.097 "name": "BaseBdev1", 00:07:52.097 "uuid": "946f1c93-f53f-4b14-85a9-2e959a2d41d3", 00:07:52.097 "is_configured": true, 00:07:52.097 "data_offset": 0, 00:07:52.097 "data_size": 65536 00:07:52.097 }, 00:07:52.097 { 00:07:52.097 "name": "BaseBdev2", 00:07:52.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.097 "is_configured": false, 00:07:52.097 "data_offset": 0, 00:07:52.097 "data_size": 0 00:07:52.097 } 00:07:52.097 ] 00:07:52.097 }' 00:07:52.097 10:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.097 10:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.403 10:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:52.403 10:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.403 10:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.403 [2024-11-20 10:30:55.843237] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:52.403 [2024-11-20 10:30:55.843297] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:52.403 10:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.403 10:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:52.403 10:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.403 10:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.403 [2024-11-20 10:30:55.855243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:52.403 [2024-11-20 10:30:55.857368] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:52.403 [2024-11-20 10:30:55.857438] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:52.403 10:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.403 10:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:52.403 10:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:52.403 10:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:52.403 10:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.403 10:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:52.403 10:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:52.403 10:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.403 10:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:52.403 10:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.403 10:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.403 10:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.403 10:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.403 10:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.403 10:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.403 10:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.403 10:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.662 10:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.662 10:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.662 "name": "Existed_Raid", 00:07:52.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.662 "strip_size_kb": 64, 00:07:52.662 "state": "configuring", 00:07:52.662 "raid_level": "raid0", 00:07:52.662 "superblock": false, 00:07:52.662 "num_base_bdevs": 2, 00:07:52.662 "num_base_bdevs_discovered": 1, 00:07:52.662 "num_base_bdevs_operational": 2, 00:07:52.662 "base_bdevs_list": [ 00:07:52.662 { 00:07:52.662 "name": "BaseBdev1", 00:07:52.662 "uuid": "946f1c93-f53f-4b14-85a9-2e959a2d41d3", 00:07:52.662 "is_configured": true, 00:07:52.662 "data_offset": 0, 00:07:52.662 "data_size": 65536 00:07:52.662 }, 00:07:52.662 { 00:07:52.662 "name": "BaseBdev2", 00:07:52.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.662 "is_configured": false, 00:07:52.662 "data_offset": 0, 00:07:52.662 "data_size": 0 00:07:52.662 } 00:07:52.662 
] 00:07:52.662 }' 00:07:52.662 10:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.662 10:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.924 10:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:52.924 10:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.924 10:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.924 [2024-11-20 10:30:56.309731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:52.924 [2024-11-20 10:30:56.309867] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:52.924 [2024-11-20 10:30:56.309898] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:52.924 [2024-11-20 10:30:56.310307] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:52.924 [2024-11-20 10:30:56.310551] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:52.924 [2024-11-20 10:30:56.310608] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:52.924 [2024-11-20 10:30:56.310965] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:52.924 BaseBdev2 00:07:52.924 10:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.924 10:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:52.924 10:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:52.924 10:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:52.924 10:30:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:52.924 10:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:52.924 10:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:52.924 10:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:52.924 10:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.924 10:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.924 10:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.924 10:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:52.924 10:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.924 10:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.924 [ 00:07:52.924 { 00:07:52.924 "name": "BaseBdev2", 00:07:52.924 "aliases": [ 00:07:52.924 "33b77cfe-4092-4c90-83e6-ce585ed3eddb" 00:07:52.924 ], 00:07:52.924 "product_name": "Malloc disk", 00:07:52.924 "block_size": 512, 00:07:52.924 "num_blocks": 65536, 00:07:52.924 "uuid": "33b77cfe-4092-4c90-83e6-ce585ed3eddb", 00:07:52.924 "assigned_rate_limits": { 00:07:52.924 "rw_ios_per_sec": 0, 00:07:52.924 "rw_mbytes_per_sec": 0, 00:07:52.924 "r_mbytes_per_sec": 0, 00:07:52.924 "w_mbytes_per_sec": 0 00:07:52.924 }, 00:07:52.924 "claimed": true, 00:07:52.924 "claim_type": "exclusive_write", 00:07:52.924 "zoned": false, 00:07:52.924 "supported_io_types": { 00:07:52.924 "read": true, 00:07:52.924 "write": true, 00:07:52.924 "unmap": true, 00:07:52.924 "flush": true, 00:07:52.924 "reset": true, 00:07:52.924 "nvme_admin": false, 00:07:52.924 "nvme_io": false, 00:07:52.924 "nvme_io_md": 
false, 00:07:52.924 "write_zeroes": true, 00:07:52.924 "zcopy": true, 00:07:52.924 "get_zone_info": false, 00:07:52.924 "zone_management": false, 00:07:52.924 "zone_append": false, 00:07:52.925 "compare": false, 00:07:52.925 "compare_and_write": false, 00:07:52.925 "abort": true, 00:07:52.925 "seek_hole": false, 00:07:52.925 "seek_data": false, 00:07:52.925 "copy": true, 00:07:52.925 "nvme_iov_md": false 00:07:52.925 }, 00:07:52.925 "memory_domains": [ 00:07:52.925 { 00:07:52.925 "dma_device_id": "system", 00:07:52.925 "dma_device_type": 1 00:07:52.925 }, 00:07:52.925 { 00:07:52.925 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.925 "dma_device_type": 2 00:07:52.925 } 00:07:52.925 ], 00:07:52.925 "driver_specific": {} 00:07:52.925 } 00:07:52.925 ] 00:07:52.925 10:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.925 10:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:52.925 10:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:52.925 10:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:52.925 10:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:52.925 10:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.925 10:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:52.925 10:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:52.925 10:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.925 10:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:52.925 10:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:52.925 10:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.925 10:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.925 10:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.925 10:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.925 10:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.925 10:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.925 10:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.925 10:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.925 10:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.925 "name": "Existed_Raid", 00:07:52.925 "uuid": "3c781034-e9f0-4502-99a4-a5b89302112c", 00:07:52.925 "strip_size_kb": 64, 00:07:52.925 "state": "online", 00:07:52.925 "raid_level": "raid0", 00:07:52.925 "superblock": false, 00:07:52.925 "num_base_bdevs": 2, 00:07:52.925 "num_base_bdevs_discovered": 2, 00:07:52.925 "num_base_bdevs_operational": 2, 00:07:52.925 "base_bdevs_list": [ 00:07:52.925 { 00:07:52.925 "name": "BaseBdev1", 00:07:52.925 "uuid": "946f1c93-f53f-4b14-85a9-2e959a2d41d3", 00:07:52.925 "is_configured": true, 00:07:52.925 "data_offset": 0, 00:07:52.925 "data_size": 65536 00:07:52.925 }, 00:07:52.925 { 00:07:52.925 "name": "BaseBdev2", 00:07:52.925 "uuid": "33b77cfe-4092-4c90-83e6-ce585ed3eddb", 00:07:52.925 "is_configured": true, 00:07:52.925 "data_offset": 0, 00:07:52.925 "data_size": 65536 00:07:52.925 } 00:07:52.925 ] 00:07:52.925 }' 00:07:52.925 10:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:52.925 10:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.493 10:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:53.493 10:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:53.493 10:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:53.493 10:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:53.493 10:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:53.493 10:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:53.493 10:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:53.493 10:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:53.493 10:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.493 10:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.493 [2024-11-20 10:30:56.837254] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:53.493 10:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.493 10:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:53.493 "name": "Existed_Raid", 00:07:53.493 "aliases": [ 00:07:53.493 "3c781034-e9f0-4502-99a4-a5b89302112c" 00:07:53.493 ], 00:07:53.493 "product_name": "Raid Volume", 00:07:53.493 "block_size": 512, 00:07:53.493 "num_blocks": 131072, 00:07:53.493 "uuid": "3c781034-e9f0-4502-99a4-a5b89302112c", 00:07:53.493 "assigned_rate_limits": { 00:07:53.493 "rw_ios_per_sec": 0, 00:07:53.493 "rw_mbytes_per_sec": 0, 00:07:53.493 "r_mbytes_per_sec": 
0, 00:07:53.493 "w_mbytes_per_sec": 0 00:07:53.493 }, 00:07:53.493 "claimed": false, 00:07:53.493 "zoned": false, 00:07:53.493 "supported_io_types": { 00:07:53.493 "read": true, 00:07:53.493 "write": true, 00:07:53.493 "unmap": true, 00:07:53.493 "flush": true, 00:07:53.493 "reset": true, 00:07:53.493 "nvme_admin": false, 00:07:53.493 "nvme_io": false, 00:07:53.493 "nvme_io_md": false, 00:07:53.493 "write_zeroes": true, 00:07:53.493 "zcopy": false, 00:07:53.493 "get_zone_info": false, 00:07:53.493 "zone_management": false, 00:07:53.493 "zone_append": false, 00:07:53.493 "compare": false, 00:07:53.493 "compare_and_write": false, 00:07:53.493 "abort": false, 00:07:53.493 "seek_hole": false, 00:07:53.493 "seek_data": false, 00:07:53.493 "copy": false, 00:07:53.493 "nvme_iov_md": false 00:07:53.493 }, 00:07:53.493 "memory_domains": [ 00:07:53.493 { 00:07:53.493 "dma_device_id": "system", 00:07:53.493 "dma_device_type": 1 00:07:53.493 }, 00:07:53.493 { 00:07:53.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.493 "dma_device_type": 2 00:07:53.493 }, 00:07:53.493 { 00:07:53.493 "dma_device_id": "system", 00:07:53.493 "dma_device_type": 1 00:07:53.493 }, 00:07:53.493 { 00:07:53.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.493 "dma_device_type": 2 00:07:53.493 } 00:07:53.493 ], 00:07:53.493 "driver_specific": { 00:07:53.493 "raid": { 00:07:53.493 "uuid": "3c781034-e9f0-4502-99a4-a5b89302112c", 00:07:53.493 "strip_size_kb": 64, 00:07:53.493 "state": "online", 00:07:53.494 "raid_level": "raid0", 00:07:53.494 "superblock": false, 00:07:53.494 "num_base_bdevs": 2, 00:07:53.494 "num_base_bdevs_discovered": 2, 00:07:53.494 "num_base_bdevs_operational": 2, 00:07:53.494 "base_bdevs_list": [ 00:07:53.494 { 00:07:53.494 "name": "BaseBdev1", 00:07:53.494 "uuid": "946f1c93-f53f-4b14-85a9-2e959a2d41d3", 00:07:53.494 "is_configured": true, 00:07:53.494 "data_offset": 0, 00:07:53.494 "data_size": 65536 00:07:53.494 }, 00:07:53.494 { 00:07:53.494 "name": "BaseBdev2", 
00:07:53.494 "uuid": "33b77cfe-4092-4c90-83e6-ce585ed3eddb", 00:07:53.494 "is_configured": true, 00:07:53.494 "data_offset": 0, 00:07:53.494 "data_size": 65536 00:07:53.494 } 00:07:53.494 ] 00:07:53.494 } 00:07:53.494 } 00:07:53.494 }' 00:07:53.494 10:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:53.494 10:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:53.494 BaseBdev2' 00:07:53.494 10:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:53.494 10:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:53.494 10:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:53.494 10:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:53.494 10:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:53.494 10:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.494 10:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.752 10:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.752 10:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:53.752 10:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:53.753 10:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:53.753 10:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:07:53.753 10:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:53.753 10:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.753 10:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.753 10:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.753 10:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:53.753 10:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:53.753 10:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:53.753 10:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.753 10:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.753 [2024-11-20 10:30:57.048602] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:53.753 [2024-11-20 10:30:57.048639] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:53.753 [2024-11-20 10:30:57.048691] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:53.753 10:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.753 10:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:53.753 10:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:53.753 10:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:53.753 10:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:53.753 10:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:53.753 10:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:53.753 10:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:53.753 10:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:53.753 10:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:53.753 10:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:53.753 10:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:53.753 10:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.753 10:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.753 10:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.753 10:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.753 10:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.753 10:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.753 10:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.753 10:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.753 10:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.753 10:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.753 "name": "Existed_Raid", 00:07:53.753 "uuid": "3c781034-e9f0-4502-99a4-a5b89302112c", 00:07:53.753 "strip_size_kb": 64, 00:07:53.753 
"state": "offline", 00:07:53.753 "raid_level": "raid0", 00:07:53.753 "superblock": false, 00:07:53.753 "num_base_bdevs": 2, 00:07:53.753 "num_base_bdevs_discovered": 1, 00:07:53.753 "num_base_bdevs_operational": 1, 00:07:53.753 "base_bdevs_list": [ 00:07:53.753 { 00:07:53.753 "name": null, 00:07:53.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.753 "is_configured": false, 00:07:53.753 "data_offset": 0, 00:07:53.753 "data_size": 65536 00:07:53.753 }, 00:07:53.753 { 00:07:53.753 "name": "BaseBdev2", 00:07:53.753 "uuid": "33b77cfe-4092-4c90-83e6-ce585ed3eddb", 00:07:53.753 "is_configured": true, 00:07:53.753 "data_offset": 0, 00:07:53.753 "data_size": 65536 00:07:53.753 } 00:07:53.753 ] 00:07:53.753 }' 00:07:53.753 10:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.753 10:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.320 10:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:54.320 10:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:54.320 10:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.320 10:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.320 10:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:54.320 10:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.320 10:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.320 10:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:54.320 10:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:54.320 10:30:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:54.320 10:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.320 10:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.320 [2024-11-20 10:30:57.678288] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:54.320 [2024-11-20 10:30:57.678430] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:54.579 10:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.579 10:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:54.579 10:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:54.579 10:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:54.579 10:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.579 10:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.579 10:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.579 10:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.579 10:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:54.579 10:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:54.579 10:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:54.579 10:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60838 00:07:54.579 10:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60838 ']' 00:07:54.579 10:30:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 60838 00:07:54.579 10:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:54.579 10:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:54.579 10:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60838 00:07:54.579 killing process with pid 60838 00:07:54.579 10:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:54.579 10:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:54.579 10:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60838' 00:07:54.579 10:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60838 00:07:54.579 10:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60838 00:07:54.579 [2024-11-20 10:30:57.880247] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:54.579 [2024-11-20 10:30:57.901819] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:55.958 ************************************ 00:07:55.958 END TEST raid_state_function_test 00:07:55.958 ************************************ 00:07:55.958 10:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:55.958 00:07:55.958 real 0m5.401s 00:07:55.958 user 0m7.695s 00:07:55.958 sys 0m0.859s 00:07:55.958 10:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.958 10:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.958 10:30:59 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:55.958 10:30:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:07:55.958 10:30:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.958 10:30:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:55.958 ************************************ 00:07:55.958 START TEST raid_state_function_test_sb 00:07:55.958 ************************************ 00:07:55.958 10:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:07:55.958 10:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:55.958 10:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:55.958 10:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:55.958 10:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:55.958 10:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:55.958 10:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:55.958 10:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:55.958 10:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:55.958 10:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:55.958 10:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:55.958 10:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:55.958 10:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:55.958 10:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:55.958 10:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:55.958 10:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:55.958 10:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:55.958 10:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:55.958 10:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:55.958 10:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:55.958 10:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:55.958 10:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:55.958 10:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:55.958 10:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:55.958 Process raid pid: 61097 00:07:55.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
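The trace above (and throughout this test) repeatedly drives `verify_raid_bdev_state`, which fetches `rpc_cmd bdev_raid_get_bdevs all`, filters for the named raid bdev with `jq -r '.[] | select(.name == "Existed_Raid")'`, and compares the reported fields against the expected values. A minimal Python sketch of that comparison, using a JSON record in the shape shown verbatim in this log (the helper name `check_raid_state` and the hardcoded sample are illustrative, not part of the SPDK test suite):

```python
import json

# Sample record in the shape bdev_raid_get_bdevs returns, copied from the
# "configuring" snapshot earlier in this trace.
SAMPLE = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid0",
  "superblock": true,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 2,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": "BaseBdev2", "is_configured": false}
  ]
}
""")

def check_raid_state(info, expected_state, raid_level, strip_size, operational):
    """Mirror of the shell helper's idea: every expected field must match,
    and the discovered count must equal the number of configured base bdevs."""
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == operational
            and info["num_base_bdevs_discovered"] == discovered)

print(check_raid_state(SAMPLE, "configuring", "raid0", 64, 2))  # True
```

This is why each `verify_raid_bdev_state` call in the trace is followed by the captured `raid_bdev_info='{ ... }'` blob: the shell helper extracts these same fields from that JSON before asserting on them.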
00:07:55.958 10:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61097 00:07:55.958 10:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61097' 00:07:55.958 10:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61097 00:07:55.958 10:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61097 ']' 00:07:55.958 10:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.958 10:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:55.958 10:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.958 10:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:55.958 10:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:55.958 10:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.958 [2024-11-20 10:30:59.370320] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:07:55.958 [2024-11-20 10:30:59.370545] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.216 [2024-11-20 10:30:59.551594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.216 [2024-11-20 10:30:59.687922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.474 [2024-11-20 10:30:59.926027] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.474 [2024-11-20 10:30:59.926176] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.040 10:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:57.040 10:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:57.040 10:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:57.040 10:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.040 10:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.040 [2024-11-20 10:31:00.249345] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:57.040 [2024-11-20 10:31:00.249470] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:57.040 [2024-11-20 10:31:00.249556] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:57.040 [2024-11-20 10:31:00.249595] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:57.040 10:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.040 
10:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:57.040 10:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.040 10:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.040 10:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:57.040 10:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.040 10:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:57.040 10:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.040 10:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.040 10:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.040 10:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.040 10:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.040 10:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.040 10:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.040 10:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.040 10:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.040 10:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.040 "name": "Existed_Raid", 00:07:57.040 "uuid": "12223643-a722-419f-87cd-a869b40f1e94", 00:07:57.040 "strip_size_kb": 
64, 00:07:57.040 "state": "configuring", 00:07:57.040 "raid_level": "raid0", 00:07:57.040 "superblock": true, 00:07:57.040 "num_base_bdevs": 2, 00:07:57.040 "num_base_bdevs_discovered": 0, 00:07:57.040 "num_base_bdevs_operational": 2, 00:07:57.040 "base_bdevs_list": [ 00:07:57.040 { 00:07:57.040 "name": "BaseBdev1", 00:07:57.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.040 "is_configured": false, 00:07:57.040 "data_offset": 0, 00:07:57.040 "data_size": 0 00:07:57.040 }, 00:07:57.040 { 00:07:57.040 "name": "BaseBdev2", 00:07:57.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.040 "is_configured": false, 00:07:57.040 "data_offset": 0, 00:07:57.040 "data_size": 0 00:07:57.040 } 00:07:57.040 ] 00:07:57.040 }' 00:07:57.040 10:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.040 10:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.318 10:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:57.318 10:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.318 10:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.318 [2024-11-20 10:31:00.644632] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:57.318 [2024-11-20 10:31:00.644673] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:57.318 10:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.318 10:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:57.318 10:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.318 10:31:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.318 [2024-11-20 10:31:00.652613] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:57.318 [2024-11-20 10:31:00.652673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:57.318 [2024-11-20 10:31:00.652686] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:57.318 [2024-11-20 10:31:00.652700] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:57.318 10:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.318 10:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:57.318 10:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.318 10:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.318 BaseBdev1 00:07:57.318 [2024-11-20 10:31:00.704213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:57.318 10:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.318 10:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:57.318 10:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:57.318 10:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:57.318 10:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:57.318 10:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:57.318 10:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:07:57.318 10:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:57.318 10:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.318 10:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.318 10:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.318 10:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:57.318 10:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.318 10:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.318 [ 00:07:57.318 { 00:07:57.318 "name": "BaseBdev1", 00:07:57.318 "aliases": [ 00:07:57.318 "b2253f11-7939-4bb4-8210-cb8b0e92f627" 00:07:57.318 ], 00:07:57.318 "product_name": "Malloc disk", 00:07:57.318 "block_size": 512, 00:07:57.318 "num_blocks": 65536, 00:07:57.318 "uuid": "b2253f11-7939-4bb4-8210-cb8b0e92f627", 00:07:57.318 "assigned_rate_limits": { 00:07:57.318 "rw_ios_per_sec": 0, 00:07:57.318 "rw_mbytes_per_sec": 0, 00:07:57.318 "r_mbytes_per_sec": 0, 00:07:57.318 "w_mbytes_per_sec": 0 00:07:57.318 }, 00:07:57.318 "claimed": true, 00:07:57.318 "claim_type": "exclusive_write", 00:07:57.318 "zoned": false, 00:07:57.318 "supported_io_types": { 00:07:57.318 "read": true, 00:07:57.318 "write": true, 00:07:57.318 "unmap": true, 00:07:57.318 "flush": true, 00:07:57.318 "reset": true, 00:07:57.318 "nvme_admin": false, 00:07:57.318 "nvme_io": false, 00:07:57.318 "nvme_io_md": false, 00:07:57.318 "write_zeroes": true, 00:07:57.318 "zcopy": true, 00:07:57.318 "get_zone_info": false, 00:07:57.318 "zone_management": false, 00:07:57.318 "zone_append": false, 00:07:57.318 "compare": false, 00:07:57.318 "compare_and_write": false, 00:07:57.318 
"abort": true, 00:07:57.318 "seek_hole": false, 00:07:57.318 "seek_data": false, 00:07:57.318 "copy": true, 00:07:57.318 "nvme_iov_md": false 00:07:57.318 }, 00:07:57.318 "memory_domains": [ 00:07:57.318 { 00:07:57.318 "dma_device_id": "system", 00:07:57.319 "dma_device_type": 1 00:07:57.319 }, 00:07:57.319 { 00:07:57.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.319 "dma_device_type": 2 00:07:57.319 } 00:07:57.319 ], 00:07:57.319 "driver_specific": {} 00:07:57.319 } 00:07:57.319 ] 00:07:57.319 10:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.319 10:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:57.319 10:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:57.319 10:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.319 10:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.319 10:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:57.319 10:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.319 10:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:57.319 10:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.319 10:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.319 10:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.319 10:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.319 10:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:57.319 10:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.319 10:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.319 10:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.319 10:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.319 10:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.319 "name": "Existed_Raid", 00:07:57.319 "uuid": "bfb50843-195d-49e5-9539-f09c147ef3d4", 00:07:57.319 "strip_size_kb": 64, 00:07:57.319 "state": "configuring", 00:07:57.319 "raid_level": "raid0", 00:07:57.319 "superblock": true, 00:07:57.319 "num_base_bdevs": 2, 00:07:57.319 "num_base_bdevs_discovered": 1, 00:07:57.319 "num_base_bdevs_operational": 2, 00:07:57.319 "base_bdevs_list": [ 00:07:57.319 { 00:07:57.319 "name": "BaseBdev1", 00:07:57.319 "uuid": "b2253f11-7939-4bb4-8210-cb8b0e92f627", 00:07:57.319 "is_configured": true, 00:07:57.319 "data_offset": 2048, 00:07:57.319 "data_size": 63488 00:07:57.319 }, 00:07:57.319 { 00:07:57.319 "name": "BaseBdev2", 00:07:57.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.319 "is_configured": false, 00:07:57.319 "data_offset": 0, 00:07:57.319 "data_size": 0 00:07:57.319 } 00:07:57.319 ] 00:07:57.319 }' 00:07:57.319 10:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.319 10:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.905 10:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:57.905 10:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.905 10:31:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:57.905 [2024-11-20 10:31:01.191523] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:57.905 [2024-11-20 10:31:01.191654] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:57.905 10:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.905 10:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:57.905 10:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.905 10:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.905 [2024-11-20 10:31:01.203550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:57.905 [2024-11-20 10:31:01.205728] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:57.905 [2024-11-20 10:31:01.205815] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:57.905 10:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.905 10:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:57.905 10:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:57.905 10:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:57.905 10:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.905 10:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.905 10:31:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:57.905 10:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.905 10:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:57.905 10:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.905 10:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.905 10:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.905 10:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.905 10:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.905 10:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.905 10:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.905 10:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.905 10:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.905 10:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.905 "name": "Existed_Raid", 00:07:57.905 "uuid": "201d37c6-f3ad-46a0-99c9-0ba07c7b9eaf", 00:07:57.905 "strip_size_kb": 64, 00:07:57.905 "state": "configuring", 00:07:57.905 "raid_level": "raid0", 00:07:57.905 "superblock": true, 00:07:57.905 "num_base_bdevs": 2, 00:07:57.905 "num_base_bdevs_discovered": 1, 00:07:57.905 "num_base_bdevs_operational": 2, 00:07:57.905 "base_bdevs_list": [ 00:07:57.905 { 00:07:57.905 "name": "BaseBdev1", 00:07:57.905 "uuid": "b2253f11-7939-4bb4-8210-cb8b0e92f627", 00:07:57.905 "is_configured": true, 00:07:57.905 "data_offset": 2048, 
00:07:57.905 "data_size": 63488 00:07:57.905 }, 00:07:57.905 { 00:07:57.905 "name": "BaseBdev2", 00:07:57.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.905 "is_configured": false, 00:07:57.905 "data_offset": 0, 00:07:57.905 "data_size": 0 00:07:57.905 } 00:07:57.905 ] 00:07:57.905 }' 00:07:57.905 10:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.905 10:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.475 10:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:58.475 10:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.475 10:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.475 [2024-11-20 10:31:01.720926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:58.475 [2024-11-20 10:31:01.721209] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:58.475 [2024-11-20 10:31:01.721227] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:58.475 [2024-11-20 10:31:01.721583] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:58.475 BaseBdev2 00:07:58.475 [2024-11-20 10:31:01.721784] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:58.475 [2024-11-20 10:31:01.721801] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:58.475 [2024-11-20 10:31:01.721976] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.475 10:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.475 10:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:07:58.475 10:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:58.475 10:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:58.475 10:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:58.475 10:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:58.475 10:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:58.475 10:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:58.475 10:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.475 10:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.475 10:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.475 10:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:58.475 10:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.475 10:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.475 [ 00:07:58.475 { 00:07:58.475 "name": "BaseBdev2", 00:07:58.475 "aliases": [ 00:07:58.475 "f64660c5-38a4-4dc1-987b-fb612a8fb14d" 00:07:58.475 ], 00:07:58.475 "product_name": "Malloc disk", 00:07:58.475 "block_size": 512, 00:07:58.475 "num_blocks": 65536, 00:07:58.475 "uuid": "f64660c5-38a4-4dc1-987b-fb612a8fb14d", 00:07:58.475 "assigned_rate_limits": { 00:07:58.475 "rw_ios_per_sec": 0, 00:07:58.475 "rw_mbytes_per_sec": 0, 00:07:58.475 "r_mbytes_per_sec": 0, 00:07:58.475 "w_mbytes_per_sec": 0 00:07:58.475 }, 00:07:58.475 "claimed": true, 00:07:58.475 "claim_type": 
"exclusive_write", 00:07:58.475 "zoned": false, 00:07:58.475 "supported_io_types": { 00:07:58.475 "read": true, 00:07:58.475 "write": true, 00:07:58.475 "unmap": true, 00:07:58.475 "flush": true, 00:07:58.475 "reset": true, 00:07:58.475 "nvme_admin": false, 00:07:58.475 "nvme_io": false, 00:07:58.475 "nvme_io_md": false, 00:07:58.475 "write_zeroes": true, 00:07:58.475 "zcopy": true, 00:07:58.475 "get_zone_info": false, 00:07:58.475 "zone_management": false, 00:07:58.475 "zone_append": false, 00:07:58.475 "compare": false, 00:07:58.475 "compare_and_write": false, 00:07:58.475 "abort": true, 00:07:58.475 "seek_hole": false, 00:07:58.475 "seek_data": false, 00:07:58.475 "copy": true, 00:07:58.475 "nvme_iov_md": false 00:07:58.475 }, 00:07:58.475 "memory_domains": [ 00:07:58.475 { 00:07:58.475 "dma_device_id": "system", 00:07:58.475 "dma_device_type": 1 00:07:58.475 }, 00:07:58.475 { 00:07:58.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.475 "dma_device_type": 2 00:07:58.475 } 00:07:58.475 ], 00:07:58.475 "driver_specific": {} 00:07:58.475 } 00:07:58.475 ] 00:07:58.475 10:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.475 10:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:58.475 10:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:58.475 10:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:58.475 10:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:58.475 10:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.475 10:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.475 10:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:58.475 10:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.475 10:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:58.475 10:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.475 10:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.475 10:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.475 10:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.475 10:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.475 10:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.475 10:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.475 10:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.475 10:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.475 10:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.475 "name": "Existed_Raid", 00:07:58.475 "uuid": "201d37c6-f3ad-46a0-99c9-0ba07c7b9eaf", 00:07:58.475 "strip_size_kb": 64, 00:07:58.475 "state": "online", 00:07:58.475 "raid_level": "raid0", 00:07:58.475 "superblock": true, 00:07:58.475 "num_base_bdevs": 2, 00:07:58.475 "num_base_bdevs_discovered": 2, 00:07:58.475 "num_base_bdevs_operational": 2, 00:07:58.475 "base_bdevs_list": [ 00:07:58.475 { 00:07:58.475 "name": "BaseBdev1", 00:07:58.475 "uuid": "b2253f11-7939-4bb4-8210-cb8b0e92f627", 00:07:58.475 "is_configured": true, 00:07:58.475 "data_offset": 2048, 00:07:58.475 "data_size": 63488 
00:07:58.475 }, 00:07:58.475 { 00:07:58.475 "name": "BaseBdev2", 00:07:58.476 "uuid": "f64660c5-38a4-4dc1-987b-fb612a8fb14d", 00:07:58.476 "is_configured": true, 00:07:58.476 "data_offset": 2048, 00:07:58.476 "data_size": 63488 00:07:58.476 } 00:07:58.476 ] 00:07:58.476 }' 00:07:58.476 10:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.476 10:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.735 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:58.735 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:58.735 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:58.735 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:58.735 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:58.735 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:58.735 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:58.735 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:58.735 10:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.735 10:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.735 [2024-11-20 10:31:02.156694] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:58.735 10:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.735 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:58.735 "name": 
"Existed_Raid", 00:07:58.735 "aliases": [ 00:07:58.735 "201d37c6-f3ad-46a0-99c9-0ba07c7b9eaf" 00:07:58.735 ], 00:07:58.735 "product_name": "Raid Volume", 00:07:58.735 "block_size": 512, 00:07:58.735 "num_blocks": 126976, 00:07:58.735 "uuid": "201d37c6-f3ad-46a0-99c9-0ba07c7b9eaf", 00:07:58.735 "assigned_rate_limits": { 00:07:58.735 "rw_ios_per_sec": 0, 00:07:58.735 "rw_mbytes_per_sec": 0, 00:07:58.735 "r_mbytes_per_sec": 0, 00:07:58.735 "w_mbytes_per_sec": 0 00:07:58.736 }, 00:07:58.736 "claimed": false, 00:07:58.736 "zoned": false, 00:07:58.736 "supported_io_types": { 00:07:58.736 "read": true, 00:07:58.736 "write": true, 00:07:58.736 "unmap": true, 00:07:58.736 "flush": true, 00:07:58.736 "reset": true, 00:07:58.736 "nvme_admin": false, 00:07:58.736 "nvme_io": false, 00:07:58.736 "nvme_io_md": false, 00:07:58.736 "write_zeroes": true, 00:07:58.736 "zcopy": false, 00:07:58.736 "get_zone_info": false, 00:07:58.736 "zone_management": false, 00:07:58.736 "zone_append": false, 00:07:58.736 "compare": false, 00:07:58.736 "compare_and_write": false, 00:07:58.736 "abort": false, 00:07:58.736 "seek_hole": false, 00:07:58.736 "seek_data": false, 00:07:58.736 "copy": false, 00:07:58.736 "nvme_iov_md": false 00:07:58.736 }, 00:07:58.736 "memory_domains": [ 00:07:58.736 { 00:07:58.736 "dma_device_id": "system", 00:07:58.736 "dma_device_type": 1 00:07:58.736 }, 00:07:58.736 { 00:07:58.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.736 "dma_device_type": 2 00:07:58.736 }, 00:07:58.736 { 00:07:58.736 "dma_device_id": "system", 00:07:58.736 "dma_device_type": 1 00:07:58.736 }, 00:07:58.736 { 00:07:58.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.736 "dma_device_type": 2 00:07:58.736 } 00:07:58.736 ], 00:07:58.736 "driver_specific": { 00:07:58.736 "raid": { 00:07:58.736 "uuid": "201d37c6-f3ad-46a0-99c9-0ba07c7b9eaf", 00:07:58.736 "strip_size_kb": 64, 00:07:58.736 "state": "online", 00:07:58.736 "raid_level": "raid0", 00:07:58.736 "superblock": true, 00:07:58.736 
"num_base_bdevs": 2, 00:07:58.736 "num_base_bdevs_discovered": 2, 00:07:58.736 "num_base_bdevs_operational": 2, 00:07:58.736 "base_bdevs_list": [ 00:07:58.736 { 00:07:58.736 "name": "BaseBdev1", 00:07:58.736 "uuid": "b2253f11-7939-4bb4-8210-cb8b0e92f627", 00:07:58.736 "is_configured": true, 00:07:58.736 "data_offset": 2048, 00:07:58.736 "data_size": 63488 00:07:58.736 }, 00:07:58.736 { 00:07:58.736 "name": "BaseBdev2", 00:07:58.736 "uuid": "f64660c5-38a4-4dc1-987b-fb612a8fb14d", 00:07:58.736 "is_configured": true, 00:07:58.736 "data_offset": 2048, 00:07:58.736 "data_size": 63488 00:07:58.736 } 00:07:58.736 ] 00:07:58.736 } 00:07:58.736 } 00:07:58.736 }' 00:07:58.736 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:58.996 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:58.996 BaseBdev2' 00:07:58.996 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.996 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:58.996 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:58.996 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:58.996 10:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.996 10:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.996 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.996 10:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:58.996 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:58.996 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:58.996 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:58.996 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.996 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:58.996 10:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.996 10:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.996 10:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.996 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:58.996 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:58.996 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:58.996 10:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.996 10:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.996 [2024-11-20 10:31:02.363951] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:58.996 [2024-11-20 10:31:02.364039] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:58.996 [2024-11-20 10:31:02.364123] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:59.256 10:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:07:59.256 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:59.256 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:59.256 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:59.256 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:59.256 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:59.256 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:59.256 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:59.256 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:59.256 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:59.256 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.256 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:59.256 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.256 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.256 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.256 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.256 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.256 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.256 10:31:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.256 10:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.256 10:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.256 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.256 "name": "Existed_Raid", 00:07:59.256 "uuid": "201d37c6-f3ad-46a0-99c9-0ba07c7b9eaf", 00:07:59.256 "strip_size_kb": 64, 00:07:59.256 "state": "offline", 00:07:59.256 "raid_level": "raid0", 00:07:59.256 "superblock": true, 00:07:59.256 "num_base_bdevs": 2, 00:07:59.256 "num_base_bdevs_discovered": 1, 00:07:59.256 "num_base_bdevs_operational": 1, 00:07:59.256 "base_bdevs_list": [ 00:07:59.256 { 00:07:59.256 "name": null, 00:07:59.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.256 "is_configured": false, 00:07:59.256 "data_offset": 0, 00:07:59.256 "data_size": 63488 00:07:59.256 }, 00:07:59.256 { 00:07:59.256 "name": "BaseBdev2", 00:07:59.256 "uuid": "f64660c5-38a4-4dc1-987b-fb612a8fb14d", 00:07:59.256 "is_configured": true, 00:07:59.256 "data_offset": 2048, 00:07:59.256 "data_size": 63488 00:07:59.256 } 00:07:59.256 ] 00:07:59.256 }' 00:07:59.256 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.256 10:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.514 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:59.514 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:59.515 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.515 10:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.515 10:31:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.515 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:59.515 10:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.515 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:59.515 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:59.515 10:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:59.515 10:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.515 10:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.515 [2024-11-20 10:31:02.985596] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:59.515 [2024-11-20 10:31:02.985717] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:59.773 10:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.773 10:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:59.773 10:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:59.773 10:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:59.773 10:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.773 10:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.773 10:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.773 10:31:03 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.773 10:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:59.774 10:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:59.774 10:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:59.774 10:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61097 00:07:59.774 10:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61097 ']' 00:07:59.774 10:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61097 00:07:59.774 10:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:59.774 10:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:59.774 10:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61097 00:07:59.774 killing process with pid 61097 00:07:59.774 10:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:59.774 10:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:59.774 10:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61097' 00:07:59.774 10:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61097 00:07:59.774 10:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61097 00:07:59.774 [2024-11-20 10:31:03.185535] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:59.774 [2024-11-20 10:31:03.204981] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:01.152 ************************************ 00:08:01.152 END TEST 
raid_state_function_test_sb 00:08:01.152 ************************************ 00:08:01.152 10:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:01.152 00:08:01.152 real 0m5.241s 00:08:01.152 user 0m7.456s 00:08:01.152 sys 0m0.800s 00:08:01.152 10:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.152 10:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.152 10:31:04 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:08:01.152 10:31:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:01.152 10:31:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.152 10:31:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:01.152 ************************************ 00:08:01.152 START TEST raid_superblock_test 00:08:01.152 ************************************ 00:08:01.152 10:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:08:01.152 10:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:01.152 10:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:01.152 10:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:01.152 10:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:01.152 10:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:01.152 10:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:01.152 10:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:01.152 10:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:01.152 10:31:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:01.152 10:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:01.152 10:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:01.152 10:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:01.152 10:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:01.152 10:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:01.152 10:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:01.152 10:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:01.152 10:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61349 00:08:01.152 10:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:01.152 10:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61349 00:08:01.152 10:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61349 ']' 00:08:01.152 10:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.152 10:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:01.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.152 10:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
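Before any RPCs are issued, raid_superblock_test builds three parallel arrays in its `(( i <= num_base_bdevs ))` loop: a malloc backing name malloc$i, a passthru wrapper name pt$i, and a fixed UUID ending in $i (these are the malloc1/pt1, malloc2/pt2 names and 00000000-...-000000000001/2 UUIDs that appear in the xtrace below). A small Python sketch of that naming scheme for num_base_bdevs=2 (the function name is ours, not SPDK's):

```python
def base_bdev_names(num_base_bdevs):
    """Mirror of the array-building loop in raid_superblock_test: each base
    bdev gets a malloc backing device, a passthru wrapper on top of it, and a
    fixed, predictable UUID passed to bdev_passthru_create."""
    mallocs, passthrus, uuids = [], [], []
    for i in range(1, num_base_bdevs + 1):
        mallocs.append(f"malloc{i}")
        passthrus.append(f"pt{i}")
        # Last UUID group is 12 digits, zero-padded, ending in the index.
        uuids.append(f"00000000-0000-0000-0000-{i:012d}")
    return mallocs, passthrus, uuids

mallocs, passthrus, uuids = base_bdev_names(2)
print(mallocs)    # ['malloc1', 'malloc2']
print(passthrus)  # ['pt1', 'pt2']
print(uuids[0])   # 00000000-0000-0000-0000-000000000001
```

The fixed UUIDs are what let the later verification JSON be matched deterministically: the pt1/pt2 entries in raid_bdev1's base_bdevs_list report exactly these UUIDs rather than randomly generated ones.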
00:08:01.152 10:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.152 10:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.410 [2024-11-20 10:31:04.672603] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:08:01.410 [2024-11-20 10:31:04.672814] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61349 ] 00:08:01.410 [2024-11-20 10:31:04.848208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.669 [2024-11-20 10:31:04.978455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.928 [2024-11-20 10:31:05.187424] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.928 [2024-11-20 10:31:05.187562] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.188 10:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.188 10:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:02.188 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:02.188 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:02.188 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:02.188 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:02.188 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:02.188 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:02.188 10:31:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:02.188 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:02.188 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:02.188 10:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.188 10:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.188 malloc1 00:08:02.188 10:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.188 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:02.188 10:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.188 10:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.188 [2024-11-20 10:31:05.618876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:02.188 [2024-11-20 10:31:05.618965] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:02.188 [2024-11-20 10:31:05.618994] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:02.188 [2024-11-20 10:31:05.619006] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:02.188 [2024-11-20 10:31:05.621586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.188 [2024-11-20 10:31:05.621634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:02.188 pt1 00:08:02.188 10:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.188 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:02.188 10:31:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:02.188 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:02.188 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:02.188 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:02.188 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:02.188 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:02.188 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:02.188 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:02.188 10:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.188 10:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.447 malloc2 00:08:02.447 10:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.447 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:02.447 10:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.447 10:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.447 [2024-11-20 10:31:05.678944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:02.447 [2024-11-20 10:31:05.679078] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:02.447 [2024-11-20 10:31:05.679129] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:02.447 
[2024-11-20 10:31:05.679162] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:02.447 [2024-11-20 10:31:05.681688] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.447 [2024-11-20 10:31:05.681772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:02.447 pt2 00:08:02.447 10:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.447 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:02.447 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:02.447 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:02.447 10:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.447 10:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.447 [2024-11-20 10:31:05.690987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:02.447 [2024-11-20 10:31:05.693041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:02.447 [2024-11-20 10:31:05.693290] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:02.447 [2024-11-20 10:31:05.693345] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:02.447 [2024-11-20 10:31:05.693684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:02.447 [2024-11-20 10:31:05.693911] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:02.447 [2024-11-20 10:31:05.693959] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:02.447 [2024-11-20 10:31:05.694194] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:02.447 10:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.447 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:02.447 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:02.447 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:02.447 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:02.447 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.447 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.447 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.447 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.447 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.447 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.447 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.447 10:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.447 10:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.447 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:02.447 10:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.447 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.447 "name": "raid_bdev1", 00:08:02.447 "uuid": 
"a10f3c70-f5e4-49f8-a10b-3c40fb1d01b1", 00:08:02.447 "strip_size_kb": 64, 00:08:02.447 "state": "online", 00:08:02.447 "raid_level": "raid0", 00:08:02.447 "superblock": true, 00:08:02.447 "num_base_bdevs": 2, 00:08:02.447 "num_base_bdevs_discovered": 2, 00:08:02.447 "num_base_bdevs_operational": 2, 00:08:02.447 "base_bdevs_list": [ 00:08:02.447 { 00:08:02.447 "name": "pt1", 00:08:02.447 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:02.447 "is_configured": true, 00:08:02.447 "data_offset": 2048, 00:08:02.447 "data_size": 63488 00:08:02.447 }, 00:08:02.447 { 00:08:02.447 "name": "pt2", 00:08:02.447 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:02.447 "is_configured": true, 00:08:02.447 "data_offset": 2048, 00:08:02.447 "data_size": 63488 00:08:02.447 } 00:08:02.447 ] 00:08:02.447 }' 00:08:02.447 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.447 10:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.706 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:02.706 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:02.706 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:02.706 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:02.706 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:02.706 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:02.706 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:02.706 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.706 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.706 
10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:02.706 [2024-11-20 10:31:06.130641] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:02.706 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.706 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:02.706 "name": "raid_bdev1", 00:08:02.706 "aliases": [ 00:08:02.706 "a10f3c70-f5e4-49f8-a10b-3c40fb1d01b1" 00:08:02.706 ], 00:08:02.706 "product_name": "Raid Volume", 00:08:02.706 "block_size": 512, 00:08:02.706 "num_blocks": 126976, 00:08:02.706 "uuid": "a10f3c70-f5e4-49f8-a10b-3c40fb1d01b1", 00:08:02.706 "assigned_rate_limits": { 00:08:02.706 "rw_ios_per_sec": 0, 00:08:02.706 "rw_mbytes_per_sec": 0, 00:08:02.706 "r_mbytes_per_sec": 0, 00:08:02.706 "w_mbytes_per_sec": 0 00:08:02.706 }, 00:08:02.706 "claimed": false, 00:08:02.706 "zoned": false, 00:08:02.706 "supported_io_types": { 00:08:02.706 "read": true, 00:08:02.706 "write": true, 00:08:02.706 "unmap": true, 00:08:02.706 "flush": true, 00:08:02.706 "reset": true, 00:08:02.707 "nvme_admin": false, 00:08:02.707 "nvme_io": false, 00:08:02.707 "nvme_io_md": false, 00:08:02.707 "write_zeroes": true, 00:08:02.707 "zcopy": false, 00:08:02.707 "get_zone_info": false, 00:08:02.707 "zone_management": false, 00:08:02.707 "zone_append": false, 00:08:02.707 "compare": false, 00:08:02.707 "compare_and_write": false, 00:08:02.707 "abort": false, 00:08:02.707 "seek_hole": false, 00:08:02.707 "seek_data": false, 00:08:02.707 "copy": false, 00:08:02.707 "nvme_iov_md": false 00:08:02.707 }, 00:08:02.707 "memory_domains": [ 00:08:02.707 { 00:08:02.707 "dma_device_id": "system", 00:08:02.707 "dma_device_type": 1 00:08:02.707 }, 00:08:02.707 { 00:08:02.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.707 "dma_device_type": 2 00:08:02.707 }, 00:08:02.707 { 00:08:02.707 "dma_device_id": "system", 00:08:02.707 
"dma_device_type": 1 00:08:02.707 }, 00:08:02.707 { 00:08:02.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.707 "dma_device_type": 2 00:08:02.707 } 00:08:02.707 ], 00:08:02.707 "driver_specific": { 00:08:02.707 "raid": { 00:08:02.707 "uuid": "a10f3c70-f5e4-49f8-a10b-3c40fb1d01b1", 00:08:02.707 "strip_size_kb": 64, 00:08:02.707 "state": "online", 00:08:02.707 "raid_level": "raid0", 00:08:02.707 "superblock": true, 00:08:02.707 "num_base_bdevs": 2, 00:08:02.707 "num_base_bdevs_discovered": 2, 00:08:02.707 "num_base_bdevs_operational": 2, 00:08:02.707 "base_bdevs_list": [ 00:08:02.707 { 00:08:02.707 "name": "pt1", 00:08:02.707 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:02.707 "is_configured": true, 00:08:02.707 "data_offset": 2048, 00:08:02.707 "data_size": 63488 00:08:02.707 }, 00:08:02.707 { 00:08:02.707 "name": "pt2", 00:08:02.707 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:02.707 "is_configured": true, 00:08:02.707 "data_offset": 2048, 00:08:02.707 "data_size": 63488 00:08:02.707 } 00:08:02.707 ] 00:08:02.707 } 00:08:02.707 } 00:08:02.707 }' 00:08:02.707 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:02.966 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:02.966 pt2' 00:08:02.966 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.966 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:02.966 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:02.966 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.966 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:08:02.966 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.966 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.966 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.966 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:02.966 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:02.966 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:02.966 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:02.966 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.966 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.966 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.966 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.966 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:02.966 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:02.966 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:02.966 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.966 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.966 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:02.966 [2024-11-20 10:31:06.370231] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:08:02.966 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.966 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a10f3c70-f5e4-49f8-a10b-3c40fb1d01b1 00:08:02.966 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a10f3c70-f5e4-49f8-a10b-3c40fb1d01b1 ']' 00:08:02.966 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:02.966 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.966 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.966 [2024-11-20 10:31:06.421791] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:02.966 [2024-11-20 10:31:06.421877] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:02.966 [2024-11-20 10:31:06.422019] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:02.966 [2024-11-20 10:31:06.422109] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:02.966 [2024-11-20 10:31:06.422174] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:02.966 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.966 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.966 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.966 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.966 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:03.236 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.236 
10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:03.236 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:03.236 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:03.236 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:03.236 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.236 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.236 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.236 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:03.236 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:03.236 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.236 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.236 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.236 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:03.236 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:03.236 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.236 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.236 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.236 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT 
rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.237 [2024-11-20 10:31:06.557631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:03.237 [2024-11-20 10:31:06.559852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:03.237 [2024-11-20 10:31:06.559930] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:03.237 [2024-11-20 10:31:06.559994] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:03.237 [2024-11-20 10:31:06.560012] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:03.237 [2024-11-20 10:31:06.560026] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:03.237 request: 00:08:03.237 { 00:08:03.237 "name": "raid_bdev1", 00:08:03.237 "raid_level": "raid0", 00:08:03.237 "base_bdevs": [ 00:08:03.237 "malloc1", 00:08:03.237 "malloc2" 00:08:03.237 ], 00:08:03.237 "strip_size_kb": 64, 00:08:03.237 "superblock": false, 00:08:03.237 "method": "bdev_raid_create", 00:08:03.237 "req_id": 1 00:08:03.237 } 00:08:03.237 Got JSON-RPC error response 00:08:03.237 response: 00:08:03.237 { 00:08:03.237 "code": -17, 00:08:03.237 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:03.237 } 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.237 [2024-11-20 10:31:06.633477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:03.237 [2024-11-20 10:31:06.633608] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:03.237 [2024-11-20 10:31:06.633665] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:03.237 [2024-11-20 10:31:06.633702] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:03.237 [2024-11-20 10:31:06.636294] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:03.237 [2024-11-20 10:31:06.636402] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:03.237 [2024-11-20 10:31:06.636552] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:03.237 [2024-11-20 10:31:06.636673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:03.237 pt1 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.237 "name": "raid_bdev1", 00:08:03.237 "uuid": "a10f3c70-f5e4-49f8-a10b-3c40fb1d01b1", 00:08:03.237 "strip_size_kb": 64, 00:08:03.237 "state": "configuring", 00:08:03.237 "raid_level": "raid0", 00:08:03.237 "superblock": true, 00:08:03.237 "num_base_bdevs": 2, 00:08:03.237 "num_base_bdevs_discovered": 1, 00:08:03.237 "num_base_bdevs_operational": 2, 00:08:03.237 "base_bdevs_list": [ 00:08:03.237 { 00:08:03.237 "name": "pt1", 00:08:03.237 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:03.237 "is_configured": true, 00:08:03.237 "data_offset": 2048, 00:08:03.237 "data_size": 63488 00:08:03.237 }, 00:08:03.237 { 00:08:03.237 "name": null, 00:08:03.237 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:03.237 "is_configured": false, 00:08:03.237 "data_offset": 2048, 00:08:03.237 "data_size": 63488 00:08:03.237 } 00:08:03.237 ] 00:08:03.237 }' 00:08:03.237 10:31:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.237 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.821 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:03.821 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:03.821 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:03.821 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:03.821 10:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.821 10:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.821 [2024-11-20 10:31:07.120668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:03.821 [2024-11-20 10:31:07.120754] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:03.821 [2024-11-20 10:31:07.120780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:03.821 [2024-11-20 10:31:07.120792] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:03.821 [2024-11-20 10:31:07.121329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:03.821 [2024-11-20 10:31:07.121354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:03.821 [2024-11-20 10:31:07.121527] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:03.821 [2024-11-20 10:31:07.121596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:03.821 [2024-11-20 10:31:07.121770] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:03.821 [2024-11-20 10:31:07.121819] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:03.821 [2024-11-20 10:31:07.122117] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:03.821 [2024-11-20 10:31:07.122338] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:03.821 [2024-11-20 10:31:07.122400] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:03.821 [2024-11-20 10:31:07.122605] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:03.821 pt2 00:08:03.821 10:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.821 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:03.821 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:03.821 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:03.821 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:03.821 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:03.821 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:03.821 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.821 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:03.821 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.821 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.821 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.821 10:31:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.821 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.821 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:03.821 10:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.821 10:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.821 10:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.821 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.821 "name": "raid_bdev1", 00:08:03.821 "uuid": "a10f3c70-f5e4-49f8-a10b-3c40fb1d01b1", 00:08:03.821 "strip_size_kb": 64, 00:08:03.821 "state": "online", 00:08:03.821 "raid_level": "raid0", 00:08:03.821 "superblock": true, 00:08:03.821 "num_base_bdevs": 2, 00:08:03.821 "num_base_bdevs_discovered": 2, 00:08:03.821 "num_base_bdevs_operational": 2, 00:08:03.821 "base_bdevs_list": [ 00:08:03.821 { 00:08:03.821 "name": "pt1", 00:08:03.821 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:03.821 "is_configured": true, 00:08:03.821 "data_offset": 2048, 00:08:03.821 "data_size": 63488 00:08:03.821 }, 00:08:03.821 { 00:08:03.821 "name": "pt2", 00:08:03.821 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:03.821 "is_configured": true, 00:08:03.821 "data_offset": 2048, 00:08:03.821 "data_size": 63488 00:08:03.821 } 00:08:03.821 ] 00:08:03.821 }' 00:08:03.821 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.821 10:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.390 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:04.390 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:04.390 
10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:04.390 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:04.390 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:04.390 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:04.390 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:04.390 10:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.390 10:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.390 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:04.390 [2024-11-20 10:31:07.588175] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:04.390 10:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.390 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:04.390 "name": "raid_bdev1", 00:08:04.390 "aliases": [ 00:08:04.390 "a10f3c70-f5e4-49f8-a10b-3c40fb1d01b1" 00:08:04.390 ], 00:08:04.390 "product_name": "Raid Volume", 00:08:04.390 "block_size": 512, 00:08:04.390 "num_blocks": 126976, 00:08:04.390 "uuid": "a10f3c70-f5e4-49f8-a10b-3c40fb1d01b1", 00:08:04.390 "assigned_rate_limits": { 00:08:04.390 "rw_ios_per_sec": 0, 00:08:04.390 "rw_mbytes_per_sec": 0, 00:08:04.390 "r_mbytes_per_sec": 0, 00:08:04.390 "w_mbytes_per_sec": 0 00:08:04.390 }, 00:08:04.390 "claimed": false, 00:08:04.390 "zoned": false, 00:08:04.390 "supported_io_types": { 00:08:04.390 "read": true, 00:08:04.390 "write": true, 00:08:04.390 "unmap": true, 00:08:04.390 "flush": true, 00:08:04.390 "reset": true, 00:08:04.390 "nvme_admin": false, 00:08:04.390 "nvme_io": false, 00:08:04.390 "nvme_io_md": false, 00:08:04.390 
"write_zeroes": true, 00:08:04.390 "zcopy": false, 00:08:04.390 "get_zone_info": false, 00:08:04.390 "zone_management": false, 00:08:04.390 "zone_append": false, 00:08:04.390 "compare": false, 00:08:04.390 "compare_and_write": false, 00:08:04.390 "abort": false, 00:08:04.390 "seek_hole": false, 00:08:04.390 "seek_data": false, 00:08:04.390 "copy": false, 00:08:04.390 "nvme_iov_md": false 00:08:04.390 }, 00:08:04.390 "memory_domains": [ 00:08:04.390 { 00:08:04.390 "dma_device_id": "system", 00:08:04.390 "dma_device_type": 1 00:08:04.390 }, 00:08:04.390 { 00:08:04.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.390 "dma_device_type": 2 00:08:04.390 }, 00:08:04.390 { 00:08:04.390 "dma_device_id": "system", 00:08:04.390 "dma_device_type": 1 00:08:04.390 }, 00:08:04.390 { 00:08:04.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.390 "dma_device_type": 2 00:08:04.390 } 00:08:04.390 ], 00:08:04.390 "driver_specific": { 00:08:04.390 "raid": { 00:08:04.390 "uuid": "a10f3c70-f5e4-49f8-a10b-3c40fb1d01b1", 00:08:04.390 "strip_size_kb": 64, 00:08:04.390 "state": "online", 00:08:04.390 "raid_level": "raid0", 00:08:04.390 "superblock": true, 00:08:04.390 "num_base_bdevs": 2, 00:08:04.390 "num_base_bdevs_discovered": 2, 00:08:04.390 "num_base_bdevs_operational": 2, 00:08:04.390 "base_bdevs_list": [ 00:08:04.390 { 00:08:04.390 "name": "pt1", 00:08:04.390 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:04.390 "is_configured": true, 00:08:04.391 "data_offset": 2048, 00:08:04.391 "data_size": 63488 00:08:04.391 }, 00:08:04.391 { 00:08:04.391 "name": "pt2", 00:08:04.391 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:04.391 "is_configured": true, 00:08:04.391 "data_offset": 2048, 00:08:04.391 "data_size": 63488 00:08:04.391 } 00:08:04.391 ] 00:08:04.391 } 00:08:04.391 } 00:08:04.391 }' 00:08:04.391 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:08:04.391 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:04.391 pt2' 00:08:04.391 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.391 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:04.391 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:04.391 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:04.391 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.391 10:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.391 10:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.391 10:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.391 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:04.391 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:04.391 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:04.391 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:04.391 10:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.391 10:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.391 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.391 10:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.391 10:31:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:04.391 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:04.391 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:04.391 10:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.391 10:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.391 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:04.391 [2024-11-20 10:31:07.811832] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:04.391 10:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.391 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a10f3c70-f5e4-49f8-a10b-3c40fb1d01b1 '!=' a10f3c70-f5e4-49f8-a10b-3c40fb1d01b1 ']' 00:08:04.391 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:04.391 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:04.391 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:04.391 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61349 00:08:04.391 10:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61349 ']' 00:08:04.391 10:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61349 00:08:04.391 10:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:04.391 10:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:04.391 10:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61349 00:08:04.391 10:31:07 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:04.391 10:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:04.391 10:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61349' 00:08:04.391 killing process with pid 61349 00:08:04.391 10:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61349 00:08:04.391 [2024-11-20 10:31:07.866867] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:04.651 10:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61349 00:08:04.651 [2024-11-20 10:31:07.867066] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:04.651 [2024-11-20 10:31:07.867127] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:04.651 [2024-11-20 10:31:07.867141] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:04.651 [2024-11-20 10:31:08.106253] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:06.027 10:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:06.027 00:08:06.027 real 0m4.792s 00:08:06.027 user 0m6.718s 00:08:06.027 sys 0m0.694s 00:08:06.027 10:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.027 10:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.027 ************************************ 00:08:06.027 END TEST raid_superblock_test 00:08:06.027 ************************************ 00:08:06.027 10:31:09 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:08:06.027 10:31:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:06.027 10:31:09 bdev_raid -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:08:06.027 10:31:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:06.027 ************************************ 00:08:06.027 START TEST raid_read_error_test 00:08:06.027 ************************************ 00:08:06.027 10:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:08:06.027 10:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:06.027 10:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:06.027 10:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:06.027 10:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:06.027 10:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:06.027 10:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:06.027 10:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:06.027 10:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:06.027 10:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:06.027 10:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:06.027 10:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:06.027 10:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:06.027 10:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:06.027 10:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:06.027 10:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:06.027 10:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- 
# local create_arg 00:08:06.027 10:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:06.027 10:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:06.027 10:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:06.027 10:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:06.027 10:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:06.027 10:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:06.027 10:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.fP3PDtAUkc 00:08:06.027 10:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:06.028 10:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61555 00:08:06.028 10:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61555 00:08:06.028 10:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61555 ']' 00:08:06.028 10:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.028 10:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:06.028 10:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:06.028 10:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:06.028 10:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.290 [2024-11-20 10:31:09.535003] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:08:06.290 [2024-11-20 10:31:09.535241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61555 ] 00:08:06.290 [2024-11-20 10:31:09.719448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.548 [2024-11-20 10:31:09.857298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.807 [2024-11-20 10:31:10.096658] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:06.807 [2024-11-20 10:31:10.096808] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.066 10:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.066 10:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:07.066 10:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:07.066 10:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:07.066 10:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.066 10:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.066 BaseBdev1_malloc 00:08:07.066 10:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.066 10:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:08:07.066 10:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.066 10:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.066 true 00:08:07.066 10:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.066 10:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:07.066 10:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.066 10:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.066 [2024-11-20 10:31:10.514099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:07.066 [2024-11-20 10:31:10.514166] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:07.066 [2024-11-20 10:31:10.514191] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:07.066 [2024-11-20 10:31:10.514205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:07.066 [2024-11-20 10:31:10.516777] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:07.066 [2024-11-20 10:31:10.516815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:07.066 BaseBdev1 00:08:07.066 10:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.066 10:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:07.066 10:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:07.066 10:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.066 10:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:08:07.325 BaseBdev2_malloc 00:08:07.325 10:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.325 10:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:07.325 10:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.325 10:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.325 true 00:08:07.325 10:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.325 10:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:07.325 10:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.325 10:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.325 [2024-11-20 10:31:10.585694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:07.325 [2024-11-20 10:31:10.585822] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:07.325 [2024-11-20 10:31:10.585867] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:07.325 [2024-11-20 10:31:10.585905] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:07.325 [2024-11-20 10:31:10.588436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:07.325 [2024-11-20 10:31:10.588526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:07.325 BaseBdev2 00:08:07.325 10:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.325 10:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:07.325 10:31:10 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.325 10:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.325 [2024-11-20 10:31:10.597747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:07.325 [2024-11-20 10:31:10.599967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:07.325 [2024-11-20 10:31:10.600288] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:07.325 [2024-11-20 10:31:10.600362] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:07.325 [2024-11-20 10:31:10.600697] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:07.325 [2024-11-20 10:31:10.600950] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:07.325 [2024-11-20 10:31:10.601003] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:07.325 [2024-11-20 10:31:10.601263] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:07.325 10:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.325 10:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:07.325 10:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:07.325 10:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:07.325 10:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:07.325 10:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.325 10:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:07.325 10:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.325 10:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.325 10:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.325 10:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.325 10:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.325 10:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.325 10:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.325 10:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:07.325 10:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.325 10:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.325 "name": "raid_bdev1", 00:08:07.325 "uuid": "9e4641f0-3ce5-436d-ad58-fb79814f863f", 00:08:07.325 "strip_size_kb": 64, 00:08:07.325 "state": "online", 00:08:07.325 "raid_level": "raid0", 00:08:07.325 "superblock": true, 00:08:07.325 "num_base_bdevs": 2, 00:08:07.325 "num_base_bdevs_discovered": 2, 00:08:07.325 "num_base_bdevs_operational": 2, 00:08:07.325 "base_bdevs_list": [ 00:08:07.325 { 00:08:07.325 "name": "BaseBdev1", 00:08:07.325 "uuid": "13835921-c279-5e91-9b5e-dbea68de4783", 00:08:07.325 "is_configured": true, 00:08:07.325 "data_offset": 2048, 00:08:07.325 "data_size": 63488 00:08:07.326 }, 00:08:07.326 { 00:08:07.326 "name": "BaseBdev2", 00:08:07.326 "uuid": "bd446632-af02-599f-94d8-611caa021afa", 00:08:07.326 "is_configured": true, 00:08:07.326 "data_offset": 2048, 00:08:07.326 "data_size": 63488 00:08:07.326 } 00:08:07.326 ] 00:08:07.326 }' 00:08:07.326 10:31:10 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.326 10:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.585 10:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:07.585 10:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:07.844 [2024-11-20 10:31:11.158369] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:08.807 10:31:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:08.807 10:31:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.807 10:31:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.807 10:31:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.807 10:31:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:08.807 10:31:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:08.807 10:31:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:08.807 10:31:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:08.807 10:31:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:08.807 10:31:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:08.807 10:31:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:08.807 10:31:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.807 10:31:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:08.807 10:31:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.807 10:31:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.807 10:31:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.807 10:31:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.807 10:31:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.807 10:31:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.807 10:31:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.807 10:31:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:08.807 10:31:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.807 10:31:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.807 "name": "raid_bdev1", 00:08:08.807 "uuid": "9e4641f0-3ce5-436d-ad58-fb79814f863f", 00:08:08.807 "strip_size_kb": 64, 00:08:08.807 "state": "online", 00:08:08.807 "raid_level": "raid0", 00:08:08.807 "superblock": true, 00:08:08.807 "num_base_bdevs": 2, 00:08:08.807 "num_base_bdevs_discovered": 2, 00:08:08.807 "num_base_bdevs_operational": 2, 00:08:08.807 "base_bdevs_list": [ 00:08:08.807 { 00:08:08.807 "name": "BaseBdev1", 00:08:08.807 "uuid": "13835921-c279-5e91-9b5e-dbea68de4783", 00:08:08.807 "is_configured": true, 00:08:08.807 "data_offset": 2048, 00:08:08.807 "data_size": 63488 00:08:08.807 }, 00:08:08.807 { 00:08:08.807 "name": "BaseBdev2", 00:08:08.807 "uuid": "bd446632-af02-599f-94d8-611caa021afa", 00:08:08.807 "is_configured": true, 00:08:08.807 "data_offset": 2048, 00:08:08.807 "data_size": 63488 00:08:08.807 } 00:08:08.807 ] 00:08:08.807 }' 00:08:08.807 10:31:12 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.807 10:31:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.066 10:31:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:09.066 10:31:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.325 10:31:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.325 [2024-11-20 10:31:12.547285] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:09.325 [2024-11-20 10:31:12.547432] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:09.325 [2024-11-20 10:31:12.550773] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:09.325 [2024-11-20 10:31:12.550876] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:09.325 [2024-11-20 10:31:12.550950] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:09.325 [2024-11-20 10:31:12.551009] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:09.325 { 00:08:09.325 "results": [ 00:08:09.325 { 00:08:09.325 "job": "raid_bdev1", 00:08:09.325 "core_mask": "0x1", 00:08:09.325 "workload": "randrw", 00:08:09.325 "percentage": 50, 00:08:09.325 "status": "finished", 00:08:09.325 "queue_depth": 1, 00:08:09.325 "io_size": 131072, 00:08:09.325 "runtime": 1.389809, 00:08:09.325 "iops": 13714.114673311225, 00:08:09.325 "mibps": 1714.2643341639032, 00:08:09.325 "io_failed": 1, 00:08:09.325 "io_timeout": 0, 00:08:09.325 "avg_latency_us": 100.90553000490954, 00:08:09.325 "min_latency_us": 29.959825327510917, 00:08:09.325 "max_latency_us": 1724.2550218340612 00:08:09.325 } 00:08:09.325 ], 00:08:09.325 "core_count": 1 00:08:09.325 } 00:08:09.325 10:31:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.325 10:31:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61555 00:08:09.325 10:31:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61555 ']' 00:08:09.325 10:31:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61555 00:08:09.325 10:31:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:09.325 10:31:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:09.325 10:31:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61555 00:08:09.325 10:31:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:09.325 killing process with pid 61555 00:08:09.325 10:31:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:09.325 10:31:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61555' 00:08:09.325 10:31:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61555 00:08:09.325 10:31:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61555 00:08:09.325 [2024-11-20 10:31:12.593210] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:09.325 [2024-11-20 10:31:12.760497] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:10.700 10:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:10.700 10:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:10.700 10:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.fP3PDtAUkc 00:08:10.700 10:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:10.700 10:31:14 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:10.700 10:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:10.700 10:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:10.700 10:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:10.700 ************************************ 00:08:10.700 END TEST raid_read_error_test 00:08:10.700 ************************************ 00:08:10.700 00:08:10.700 real 0m4.715s 00:08:10.700 user 0m5.673s 00:08:10.700 sys 0m0.547s 00:08:10.700 10:31:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.700 10:31:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.959 10:31:14 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:08:10.959 10:31:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:10.959 10:31:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.959 10:31:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:10.959 ************************************ 00:08:10.959 START TEST raid_write_error_test 00:08:10.959 ************************************ 00:08:10.959 10:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:08:10.959 10:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:10.959 10:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:10.959 10:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:10.959 10:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:10.959 10:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:10.959 10:31:14 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:10.959 10:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:10.959 10:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:10.959 10:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:10.959 10:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:10.959 10:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:10.959 10:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:10.959 10:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:10.959 10:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:10.959 10:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:10.959 10:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:10.959 10:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:10.959 10:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:10.959 10:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:10.959 10:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:10.959 10:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:10.959 10:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:10.959 10:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.x75M0P7ph2 00:08:10.959 10:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61706 00:08:10.959 10:31:14 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:10.959 10:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61706 00:08:10.959 10:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61706 ']' 00:08:10.959 10:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.959 10:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:10.959 10:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.959 10:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:10.959 10:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.959 [2024-11-20 10:31:14.299245] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:08:10.959 [2024-11-20 10:31:14.299514] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61706 ] 00:08:11.218 [2024-11-20 10:31:14.480665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.218 [2024-11-20 10:31:14.616053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.479 [2024-11-20 10:31:14.857950] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:11.479 [2024-11-20 10:31:14.858025] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:11.739 10:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:11.739 10:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:11.739 10:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:11.739 10:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:11.739 10:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.739 10:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.999 BaseBdev1_malloc 00:08:11.999 10:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.999 10:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:11.999 10:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.999 10:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.999 true 00:08:11.999 10:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:11.999 10:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:11.999 10:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.999 10:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.999 [2024-11-20 10:31:15.281682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:11.999 [2024-11-20 10:31:15.281816] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:11.999 [2024-11-20 10:31:15.281848] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:11.999 [2024-11-20 10:31:15.281864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:11.999 [2024-11-20 10:31:15.284597] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:11.999 [2024-11-20 10:31:15.284645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:11.999 BaseBdev1 00:08:11.999 10:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.999 10:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:11.999 10:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:11.999 10:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.999 10:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.999 BaseBdev2_malloc 00:08:11.999 10:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.999 10:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:11.999 10:31:15 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.999 10:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.999 true 00:08:11.999 10:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.999 10:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:11.999 10:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.999 10:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.999 [2024-11-20 10:31:15.355911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:11.999 [2024-11-20 10:31:15.355978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:11.999 [2024-11-20 10:31:15.355999] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:11.999 [2024-11-20 10:31:15.356012] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:11.999 [2024-11-20 10:31:15.358503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:11.999 [2024-11-20 10:31:15.358616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:11.999 BaseBdev2 00:08:11.999 10:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.000 10:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:12.000 10:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.000 10:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.000 [2024-11-20 10:31:15.367967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:12.000 [2024-11-20 10:31:15.370082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:12.000 [2024-11-20 10:31:15.370379] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:12.000 [2024-11-20 10:31:15.370404] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:12.000 [2024-11-20 10:31:15.370705] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:12.000 [2024-11-20 10:31:15.370919] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:12.000 [2024-11-20 10:31:15.370934] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:12.000 [2024-11-20 10:31:15.371139] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:12.000 10:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.000 10:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:12.000 10:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:12.000 10:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:12.000 10:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:12.000 10:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.000 10:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:12.000 10:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.000 10:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.000 10:31:15 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.000 10:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.000 10:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.000 10:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:12.000 10:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.000 10:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.000 10:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.000 10:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.000 "name": "raid_bdev1", 00:08:12.000 "uuid": "7670d315-2634-424f-9739-3542a10b422d", 00:08:12.000 "strip_size_kb": 64, 00:08:12.000 "state": "online", 00:08:12.000 "raid_level": "raid0", 00:08:12.000 "superblock": true, 00:08:12.000 "num_base_bdevs": 2, 00:08:12.000 "num_base_bdevs_discovered": 2, 00:08:12.000 "num_base_bdevs_operational": 2, 00:08:12.000 "base_bdevs_list": [ 00:08:12.000 { 00:08:12.000 "name": "BaseBdev1", 00:08:12.000 "uuid": "1fc912b9-54f8-5a8a-bdcd-923d2410cced", 00:08:12.000 "is_configured": true, 00:08:12.000 "data_offset": 2048, 00:08:12.000 "data_size": 63488 00:08:12.000 }, 00:08:12.000 { 00:08:12.000 "name": "BaseBdev2", 00:08:12.000 "uuid": "d007a5fa-71e7-51a2-8312-3cb6273300c8", 00:08:12.000 "is_configured": true, 00:08:12.000 "data_offset": 2048, 00:08:12.000 "data_size": 63488 00:08:12.000 } 00:08:12.000 ] 00:08:12.000 }' 00:08:12.000 10:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.000 10:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.569 10:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:12.569 10:31:15 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:12.569 [2024-11-20 10:31:15.908666] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:13.508 10:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:13.508 10:31:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.508 10:31:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.508 10:31:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.508 10:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:13.508 10:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:13.508 10:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:13.508 10:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:13.508 10:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:13.508 10:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:13.508 10:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.508 10:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.508 10:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:13.508 10:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.508 10:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.508 10:31:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.508 10:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.508 10:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.508 10:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:13.508 10:31:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.508 10:31:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.508 10:31:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.508 10:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.508 "name": "raid_bdev1", 00:08:13.508 "uuid": "7670d315-2634-424f-9739-3542a10b422d", 00:08:13.508 "strip_size_kb": 64, 00:08:13.508 "state": "online", 00:08:13.508 "raid_level": "raid0", 00:08:13.508 "superblock": true, 00:08:13.508 "num_base_bdevs": 2, 00:08:13.508 "num_base_bdevs_discovered": 2, 00:08:13.508 "num_base_bdevs_operational": 2, 00:08:13.508 "base_bdevs_list": [ 00:08:13.508 { 00:08:13.508 "name": "BaseBdev1", 00:08:13.508 "uuid": "1fc912b9-54f8-5a8a-bdcd-923d2410cced", 00:08:13.508 "is_configured": true, 00:08:13.508 "data_offset": 2048, 00:08:13.508 "data_size": 63488 00:08:13.508 }, 00:08:13.508 { 00:08:13.508 "name": "BaseBdev2", 00:08:13.508 "uuid": "d007a5fa-71e7-51a2-8312-3cb6273300c8", 00:08:13.508 "is_configured": true, 00:08:13.508 "data_offset": 2048, 00:08:13.508 "data_size": 63488 00:08:13.508 } 00:08:13.508 ] 00:08:13.508 }' 00:08:13.508 10:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.508 10:31:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.077 10:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:08:14.077 10:31:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.077 10:31:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.077 [2024-11-20 10:31:17.285852] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:14.077 [2024-11-20 10:31:17.285967] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:14.077 [2024-11-20 10:31:17.289328] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:14.077 [2024-11-20 10:31:17.289444] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:14.077 [2024-11-20 10:31:17.289518] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:14.077 [2024-11-20 10:31:17.289574] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:14.077 { 00:08:14.077 "results": [ 00:08:14.077 { 00:08:14.077 "job": "raid_bdev1", 00:08:14.077 "core_mask": "0x1", 00:08:14.077 "workload": "randrw", 00:08:14.077 "percentage": 50, 00:08:14.077 "status": "finished", 00:08:14.077 "queue_depth": 1, 00:08:14.077 "io_size": 131072, 00:08:14.077 "runtime": 1.377858, 00:08:14.077 "iops": 13540.582556402764, 00:08:14.077 "mibps": 1692.5728195503455, 00:08:14.077 "io_failed": 1, 00:08:14.077 "io_timeout": 0, 00:08:14.077 "avg_latency_us": 102.28438100471789, 00:08:14.077 "min_latency_us": 29.959825327510917, 00:08:14.077 "max_latency_us": 1788.646288209607 00:08:14.077 } 00:08:14.077 ], 00:08:14.077 "core_count": 1 00:08:14.077 } 00:08:14.077 10:31:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.077 10:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61706 00:08:14.077 10:31:17 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61706 ']' 00:08:14.077 10:31:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61706 00:08:14.077 10:31:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:14.077 10:31:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:14.077 10:31:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61706 00:08:14.077 killing process with pid 61706 00:08:14.077 10:31:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:14.077 10:31:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:14.077 10:31:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61706' 00:08:14.077 10:31:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61706 00:08:14.077 [2024-11-20 10:31:17.335964] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:14.077 10:31:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61706 00:08:14.077 [2024-11-20 10:31:17.498029] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:15.456 10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:15.456 10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.x75M0P7ph2 00:08:15.456 10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:15.456 10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:15.456 10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:15.456 ************************************ 00:08:15.456 END TEST raid_write_error_test 00:08:15.456 ************************************ 00:08:15.456 
10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:15.456 10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:15.456 10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:15.456 00:08:15.456 real 0m4.709s 00:08:15.456 user 0m5.650s 00:08:15.456 sys 0m0.542s 00:08:15.456 10:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.456 10:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.722 10:31:18 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:15.722 10:31:18 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:08:15.722 10:31:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:15.722 10:31:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.722 10:31:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:15.722 ************************************ 00:08:15.722 START TEST raid_state_function_test 00:08:15.722 ************************************ 00:08:15.722 10:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:08:15.722 10:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:15.722 10:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:15.722 10:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:15.722 10:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:15.722 10:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:15.722 10:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:08:15.722 10:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:15.722 10:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:15.722 10:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:15.722 10:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:15.722 10:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:15.722 10:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:15.722 10:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:15.722 10:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:15.722 10:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:15.722 10:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:15.722 10:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:15.722 10:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:15.722 10:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:15.722 10:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:15.722 10:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:15.722 10:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:15.722 10:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:15.722 10:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61850 00:08:15.722 10:31:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61850' 00:08:15.722 Process raid pid: 61850 00:08:15.722 10:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61850 00:08:15.722 10:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:15.722 10:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61850 ']' 00:08:15.722 10:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.722 10:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:15.722 10:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.722 10:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:15.722 10:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.722 [2024-11-20 10:31:19.064270] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:08:15.722 [2024-11-20 10:31:19.064525] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.980 [2024-11-20 10:31:19.244131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.980 [2024-11-20 10:31:19.380502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.237 [2024-11-20 10:31:19.636184] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:16.237 [2024-11-20 10:31:19.636239] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:16.805 10:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:16.805 10:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:16.805 10:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:16.805 10:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.805 10:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.805 [2024-11-20 10:31:20.002598] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:16.805 [2024-11-20 10:31:20.002660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:16.805 [2024-11-20 10:31:20.002673] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:16.806 [2024-11-20 10:31:20.002685] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:16.806 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.806 10:31:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:16.806 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.806 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.806 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:16.806 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.806 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:16.806 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.806 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.806 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.806 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.806 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.806 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.806 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.806 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.806 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.806 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.806 "name": "Existed_Raid", 00:08:16.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.806 "strip_size_kb": 64, 00:08:16.806 "state": "configuring", 00:08:16.806 
"raid_level": "concat", 00:08:16.806 "superblock": false, 00:08:16.806 "num_base_bdevs": 2, 00:08:16.806 "num_base_bdevs_discovered": 0, 00:08:16.806 "num_base_bdevs_operational": 2, 00:08:16.806 "base_bdevs_list": [ 00:08:16.806 { 00:08:16.806 "name": "BaseBdev1", 00:08:16.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.806 "is_configured": false, 00:08:16.806 "data_offset": 0, 00:08:16.806 "data_size": 0 00:08:16.806 }, 00:08:16.806 { 00:08:16.806 "name": "BaseBdev2", 00:08:16.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.806 "is_configured": false, 00:08:16.806 "data_offset": 0, 00:08:16.806 "data_size": 0 00:08:16.806 } 00:08:16.806 ] 00:08:16.806 }' 00:08:16.806 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.806 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.065 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:17.065 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.065 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.065 [2024-11-20 10:31:20.437839] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:17.065 [2024-11-20 10:31:20.437936] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:17.065 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.065 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:17.065 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.065 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:17.065 [2024-11-20 10:31:20.449823] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:17.065 [2024-11-20 10:31:20.449928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:17.065 [2024-11-20 10:31:20.449962] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:17.065 [2024-11-20 10:31:20.449993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:17.065 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.065 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:17.065 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.065 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.065 [2024-11-20 10:31:20.503907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:17.065 BaseBdev1 00:08:17.065 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.065 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:17.065 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:17.065 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:17.066 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:17.066 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:17.066 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:17.066 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:08:17.066 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.066 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.066 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.066 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:17.066 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.066 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.066 [ 00:08:17.066 { 00:08:17.066 "name": "BaseBdev1", 00:08:17.066 "aliases": [ 00:08:17.066 "baca41d1-889c-49c6-9ca8-bc4a516a72f3" 00:08:17.066 ], 00:08:17.066 "product_name": "Malloc disk", 00:08:17.066 "block_size": 512, 00:08:17.066 "num_blocks": 65536, 00:08:17.066 "uuid": "baca41d1-889c-49c6-9ca8-bc4a516a72f3", 00:08:17.066 "assigned_rate_limits": { 00:08:17.066 "rw_ios_per_sec": 0, 00:08:17.066 "rw_mbytes_per_sec": 0, 00:08:17.066 "r_mbytes_per_sec": 0, 00:08:17.066 "w_mbytes_per_sec": 0 00:08:17.066 }, 00:08:17.066 "claimed": true, 00:08:17.066 "claim_type": "exclusive_write", 00:08:17.066 "zoned": false, 00:08:17.066 "supported_io_types": { 00:08:17.066 "read": true, 00:08:17.066 "write": true, 00:08:17.066 "unmap": true, 00:08:17.066 "flush": true, 00:08:17.066 "reset": true, 00:08:17.066 "nvme_admin": false, 00:08:17.066 "nvme_io": false, 00:08:17.066 "nvme_io_md": false, 00:08:17.066 "write_zeroes": true, 00:08:17.066 "zcopy": true, 00:08:17.066 "get_zone_info": false, 00:08:17.066 "zone_management": false, 00:08:17.066 "zone_append": false, 00:08:17.066 "compare": false, 00:08:17.327 "compare_and_write": false, 00:08:17.327 "abort": true, 00:08:17.327 "seek_hole": false, 00:08:17.327 "seek_data": false, 00:08:17.327 "copy": true, 00:08:17.327 "nvme_iov_md": 
false 00:08:17.327 }, 00:08:17.327 "memory_domains": [ 00:08:17.327 { 00:08:17.327 "dma_device_id": "system", 00:08:17.327 "dma_device_type": 1 00:08:17.327 }, 00:08:17.327 { 00:08:17.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.327 "dma_device_type": 2 00:08:17.327 } 00:08:17.327 ], 00:08:17.327 "driver_specific": {} 00:08:17.327 } 00:08:17.327 ] 00:08:17.327 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.327 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:17.327 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:17.327 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.327 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:17.327 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:17.327 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.327 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:17.327 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.327 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.327 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.327 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.327 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.327 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.327 10:31:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.327 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.327 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.327 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.327 "name": "Existed_Raid", 00:08:17.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.327 "strip_size_kb": 64, 00:08:17.327 "state": "configuring", 00:08:17.327 "raid_level": "concat", 00:08:17.327 "superblock": false, 00:08:17.327 "num_base_bdevs": 2, 00:08:17.327 "num_base_bdevs_discovered": 1, 00:08:17.327 "num_base_bdevs_operational": 2, 00:08:17.327 "base_bdevs_list": [ 00:08:17.327 { 00:08:17.327 "name": "BaseBdev1", 00:08:17.327 "uuid": "baca41d1-889c-49c6-9ca8-bc4a516a72f3", 00:08:17.327 "is_configured": true, 00:08:17.327 "data_offset": 0, 00:08:17.327 "data_size": 65536 00:08:17.327 }, 00:08:17.327 { 00:08:17.327 "name": "BaseBdev2", 00:08:17.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.327 "is_configured": false, 00:08:17.327 "data_offset": 0, 00:08:17.327 "data_size": 0 00:08:17.327 } 00:08:17.327 ] 00:08:17.327 }' 00:08:17.327 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.327 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.587 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:17.587 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.587 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.587 [2024-11-20 10:31:20.963382] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:17.587 [2024-11-20 10:31:20.963438] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:17.587 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.587 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:17.587 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.587 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.587 [2024-11-20 10:31:20.975431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:17.587 [2024-11-20 10:31:20.977578] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:17.587 [2024-11-20 10:31:20.977670] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:17.587 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.587 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:17.587 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:17.587 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:17.587 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.587 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:17.587 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:17.587 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.587 10:31:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:17.587 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.587 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.587 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.587 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.587 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.587 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.587 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.587 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.587 10:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.587 10:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.587 "name": "Existed_Raid", 00:08:17.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.588 "strip_size_kb": 64, 00:08:17.588 "state": "configuring", 00:08:17.588 "raid_level": "concat", 00:08:17.588 "superblock": false, 00:08:17.588 "num_base_bdevs": 2, 00:08:17.588 "num_base_bdevs_discovered": 1, 00:08:17.588 "num_base_bdevs_operational": 2, 00:08:17.588 "base_bdevs_list": [ 00:08:17.588 { 00:08:17.588 "name": "BaseBdev1", 00:08:17.588 "uuid": "baca41d1-889c-49c6-9ca8-bc4a516a72f3", 00:08:17.588 "is_configured": true, 00:08:17.588 "data_offset": 0, 00:08:17.588 "data_size": 65536 00:08:17.588 }, 00:08:17.588 { 00:08:17.588 "name": "BaseBdev2", 00:08:17.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.588 "is_configured": false, 00:08:17.588 "data_offset": 0, 00:08:17.588 "data_size": 0 
00:08:17.588 } 00:08:17.588 ] 00:08:17.588 }' 00:08:17.588 10:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.588 10:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.218 10:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:18.218 10:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.218 10:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.218 [2024-11-20 10:31:21.511269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:18.218 [2024-11-20 10:31:21.511331] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:18.218 [2024-11-20 10:31:21.511341] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:18.218 [2024-11-20 10:31:21.511693] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:18.218 [2024-11-20 10:31:21.511891] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:18.218 [2024-11-20 10:31:21.511915] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:18.218 [2024-11-20 10:31:21.512218] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:18.218 BaseBdev2 00:08:18.218 10:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.218 10:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:18.218 10:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:18.218 10:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:18.218 10:31:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:18.218 10:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:18.218 10:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:18.218 10:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:18.218 10:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.218 10:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.218 10:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.218 10:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:18.218 10:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.218 10:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.218 [ 00:08:18.218 { 00:08:18.218 "name": "BaseBdev2", 00:08:18.218 "aliases": [ 00:08:18.218 "7b9e874d-e61b-4f6d-9ca5-240606bc1c49" 00:08:18.218 ], 00:08:18.218 "product_name": "Malloc disk", 00:08:18.218 "block_size": 512, 00:08:18.218 "num_blocks": 65536, 00:08:18.218 "uuid": "7b9e874d-e61b-4f6d-9ca5-240606bc1c49", 00:08:18.218 "assigned_rate_limits": { 00:08:18.218 "rw_ios_per_sec": 0, 00:08:18.218 "rw_mbytes_per_sec": 0, 00:08:18.218 "r_mbytes_per_sec": 0, 00:08:18.218 "w_mbytes_per_sec": 0 00:08:18.218 }, 00:08:18.218 "claimed": true, 00:08:18.218 "claim_type": "exclusive_write", 00:08:18.218 "zoned": false, 00:08:18.218 "supported_io_types": { 00:08:18.218 "read": true, 00:08:18.218 "write": true, 00:08:18.218 "unmap": true, 00:08:18.218 "flush": true, 00:08:18.218 "reset": true, 00:08:18.218 "nvme_admin": false, 00:08:18.218 "nvme_io": false, 00:08:18.218 "nvme_io_md": 
false, 00:08:18.218 "write_zeroes": true, 00:08:18.218 "zcopy": true, 00:08:18.218 "get_zone_info": false, 00:08:18.218 "zone_management": false, 00:08:18.218 "zone_append": false, 00:08:18.218 "compare": false, 00:08:18.218 "compare_and_write": false, 00:08:18.218 "abort": true, 00:08:18.218 "seek_hole": false, 00:08:18.218 "seek_data": false, 00:08:18.218 "copy": true, 00:08:18.218 "nvme_iov_md": false 00:08:18.218 }, 00:08:18.218 "memory_domains": [ 00:08:18.218 { 00:08:18.218 "dma_device_id": "system", 00:08:18.218 "dma_device_type": 1 00:08:18.218 }, 00:08:18.218 { 00:08:18.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.218 "dma_device_type": 2 00:08:18.218 } 00:08:18.218 ], 00:08:18.218 "driver_specific": {} 00:08:18.218 } 00:08:18.218 ] 00:08:18.218 10:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.218 10:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:18.218 10:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:18.218 10:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:18.218 10:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:18.218 10:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.218 10:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:18.218 10:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:18.218 10:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.218 10:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:18.218 10:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:18.218 10:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.218 10:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.218 10:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.218 10:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.218 10:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.218 10:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.218 10:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.218 10:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.218 10:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.218 "name": "Existed_Raid", 00:08:18.218 "uuid": "5f8a3ad1-37c8-44aa-92dc-eff014a9147b", 00:08:18.218 "strip_size_kb": 64, 00:08:18.218 "state": "online", 00:08:18.218 "raid_level": "concat", 00:08:18.219 "superblock": false, 00:08:18.219 "num_base_bdevs": 2, 00:08:18.219 "num_base_bdevs_discovered": 2, 00:08:18.219 "num_base_bdevs_operational": 2, 00:08:18.219 "base_bdevs_list": [ 00:08:18.219 { 00:08:18.219 "name": "BaseBdev1", 00:08:18.219 "uuid": "baca41d1-889c-49c6-9ca8-bc4a516a72f3", 00:08:18.219 "is_configured": true, 00:08:18.219 "data_offset": 0, 00:08:18.219 "data_size": 65536 00:08:18.219 }, 00:08:18.219 { 00:08:18.219 "name": "BaseBdev2", 00:08:18.219 "uuid": "7b9e874d-e61b-4f6d-9ca5-240606bc1c49", 00:08:18.219 "is_configured": true, 00:08:18.219 "data_offset": 0, 00:08:18.219 "data_size": 65536 00:08:18.219 } 00:08:18.219 ] 00:08:18.219 }' 00:08:18.219 10:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:18.219 10:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.787 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:18.787 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:18.787 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:18.787 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:18.787 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:18.787 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:18.787 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:18.787 10:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.787 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:18.787 10:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.787 [2024-11-20 10:31:22.042791] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:18.787 10:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.787 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:18.787 "name": "Existed_Raid", 00:08:18.787 "aliases": [ 00:08:18.787 "5f8a3ad1-37c8-44aa-92dc-eff014a9147b" 00:08:18.787 ], 00:08:18.787 "product_name": "Raid Volume", 00:08:18.787 "block_size": 512, 00:08:18.787 "num_blocks": 131072, 00:08:18.787 "uuid": "5f8a3ad1-37c8-44aa-92dc-eff014a9147b", 00:08:18.787 "assigned_rate_limits": { 00:08:18.787 "rw_ios_per_sec": 0, 00:08:18.787 "rw_mbytes_per_sec": 0, 00:08:18.787 "r_mbytes_per_sec": 
0, 00:08:18.787 "w_mbytes_per_sec": 0 00:08:18.787 }, 00:08:18.787 "claimed": false, 00:08:18.787 "zoned": false, 00:08:18.787 "supported_io_types": { 00:08:18.787 "read": true, 00:08:18.787 "write": true, 00:08:18.787 "unmap": true, 00:08:18.787 "flush": true, 00:08:18.787 "reset": true, 00:08:18.787 "nvme_admin": false, 00:08:18.787 "nvme_io": false, 00:08:18.787 "nvme_io_md": false, 00:08:18.787 "write_zeroes": true, 00:08:18.787 "zcopy": false, 00:08:18.787 "get_zone_info": false, 00:08:18.787 "zone_management": false, 00:08:18.787 "zone_append": false, 00:08:18.787 "compare": false, 00:08:18.787 "compare_and_write": false, 00:08:18.787 "abort": false, 00:08:18.787 "seek_hole": false, 00:08:18.787 "seek_data": false, 00:08:18.787 "copy": false, 00:08:18.787 "nvme_iov_md": false 00:08:18.787 }, 00:08:18.787 "memory_domains": [ 00:08:18.787 { 00:08:18.787 "dma_device_id": "system", 00:08:18.787 "dma_device_type": 1 00:08:18.787 }, 00:08:18.787 { 00:08:18.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.787 "dma_device_type": 2 00:08:18.787 }, 00:08:18.787 { 00:08:18.787 "dma_device_id": "system", 00:08:18.787 "dma_device_type": 1 00:08:18.787 }, 00:08:18.787 { 00:08:18.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.787 "dma_device_type": 2 00:08:18.787 } 00:08:18.787 ], 00:08:18.787 "driver_specific": { 00:08:18.787 "raid": { 00:08:18.787 "uuid": "5f8a3ad1-37c8-44aa-92dc-eff014a9147b", 00:08:18.787 "strip_size_kb": 64, 00:08:18.787 "state": "online", 00:08:18.787 "raid_level": "concat", 00:08:18.787 "superblock": false, 00:08:18.787 "num_base_bdevs": 2, 00:08:18.787 "num_base_bdevs_discovered": 2, 00:08:18.787 "num_base_bdevs_operational": 2, 00:08:18.787 "base_bdevs_list": [ 00:08:18.787 { 00:08:18.787 "name": "BaseBdev1", 00:08:18.787 "uuid": "baca41d1-889c-49c6-9ca8-bc4a516a72f3", 00:08:18.787 "is_configured": true, 00:08:18.787 "data_offset": 0, 00:08:18.787 "data_size": 65536 00:08:18.787 }, 00:08:18.787 { 00:08:18.787 "name": "BaseBdev2", 
00:08:18.787 "uuid": "7b9e874d-e61b-4f6d-9ca5-240606bc1c49", 00:08:18.787 "is_configured": true, 00:08:18.787 "data_offset": 0, 00:08:18.787 "data_size": 65536 00:08:18.787 } 00:08:18.787 ] 00:08:18.787 } 00:08:18.787 } 00:08:18.787 }' 00:08:18.787 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:18.787 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:18.787 BaseBdev2' 00:08:18.787 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.787 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:18.787 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:18.787 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:18.788 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.788 10:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.788 10:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.788 10:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.788 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:18.788 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:18.788 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:18.788 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:08:18.788 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:18.788 10:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.788 10:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.788 10:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.048 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:19.048 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:19.048 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:19.048 10:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.048 10:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.048 [2024-11-20 10:31:22.306067] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:19.048 [2024-11-20 10:31:22.306107] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:19.048 [2024-11-20 10:31:22.306166] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:19.048 10:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.048 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:19.048 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:19.048 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:19.048 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:19.048 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:08:19.048 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:19.048 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.048 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:19.048 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:19.048 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.048 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:19.048 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.048 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.048 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.048 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.048 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.048 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.048 10:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.048 10:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.048 10:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.048 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.048 "name": "Existed_Raid", 00:08:19.048 "uuid": "5f8a3ad1-37c8-44aa-92dc-eff014a9147b", 00:08:19.048 "strip_size_kb": 64, 00:08:19.048 
"state": "offline", 00:08:19.048 "raid_level": "concat", 00:08:19.048 "superblock": false, 00:08:19.048 "num_base_bdevs": 2, 00:08:19.048 "num_base_bdevs_discovered": 1, 00:08:19.048 "num_base_bdevs_operational": 1, 00:08:19.048 "base_bdevs_list": [ 00:08:19.048 { 00:08:19.048 "name": null, 00:08:19.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.048 "is_configured": false, 00:08:19.048 "data_offset": 0, 00:08:19.048 "data_size": 65536 00:08:19.048 }, 00:08:19.048 { 00:08:19.048 "name": "BaseBdev2", 00:08:19.048 "uuid": "7b9e874d-e61b-4f6d-9ca5-240606bc1c49", 00:08:19.048 "is_configured": true, 00:08:19.048 "data_offset": 0, 00:08:19.048 "data_size": 65536 00:08:19.048 } 00:08:19.048 ] 00:08:19.048 }' 00:08:19.048 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.048 10:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.618 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:19.618 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:19.618 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.618 10:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.618 10:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.618 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:19.618 10:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.618 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:19.618 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:19.618 10:31:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:19.618 10:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.618 10:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.618 [2024-11-20 10:31:22.873826] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:19.618 [2024-11-20 10:31:22.873952] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:19.618 10:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.618 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:19.618 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:19.618 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.618 10:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.618 10:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.618 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:19.618 10:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.618 10:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:19.618 10:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:19.618 10:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:19.618 10:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61850 00:08:19.618 10:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61850 ']' 00:08:19.618 10:31:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61850 00:08:19.618 10:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:19.618 10:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:19.618 10:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61850 00:08:19.618 10:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:19.618 10:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:19.618 10:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61850' 00:08:19.618 killing process with pid 61850 00:08:19.618 10:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61850 00:08:19.618 [2024-11-20 10:31:23.089814] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:19.618 10:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61850 00:08:19.878 [2024-11-20 10:31:23.110075] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:21.259 10:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:21.259 00:08:21.259 real 0m5.473s 00:08:21.259 user 0m7.860s 00:08:21.259 sys 0m0.830s 00:08:21.259 10:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.259 10:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.259 ************************************ 00:08:21.259 END TEST raid_state_function_test 00:08:21.259 ************************************ 00:08:21.259 10:31:24 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:08:21.259 10:31:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:08:21.259 10:31:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.259 10:31:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:21.259 ************************************ 00:08:21.259 START TEST raid_state_function_test_sb 00:08:21.259 ************************************ 00:08:21.259 10:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:08:21.259 10:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:21.259 10:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:21.259 10:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:21.259 10:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:21.259 10:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:21.259 10:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:21.259 10:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:21.259 10:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:21.259 10:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:21.259 10:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:21.259 10:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:21.259 10:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:21.259 10:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:21.259 10:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:08:21.259 10:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:21.259 10:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:21.259 10:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:21.259 10:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:21.259 10:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:21.260 10:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:21.260 10:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:21.260 10:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:21.260 10:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:21.260 10:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62103 00:08:21.260 10:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62103' 00:08:21.260 Process raid pid: 62103 00:08:21.260 10:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62103 00:08:21.260 10:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:21.260 10:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62103 ']' 00:08:21.260 10:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.260 10:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.260 10:31:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.260 10:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.260 10:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.260 [2024-11-20 10:31:24.614897] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:08:21.260 [2024-11-20 10:31:24.615116] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.519 [2024-11-20 10:31:24.800663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.519 [2024-11-20 10:31:24.953248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.780 [2024-11-20 10:31:25.206619] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.780 [2024-11-20 10:31:25.206670] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.118 10:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:22.118 10:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:22.118 10:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:22.118 10:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.118 10:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.118 [2024-11-20 10:31:25.527109] bdev.c:8278:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:08:22.118 [2024-11-20 10:31:25.527171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:22.118 [2024-11-20 10:31:25.527184] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:22.118 [2024-11-20 10:31:25.527197] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:22.118 10:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.118 10:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:22.118 10:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.118 10:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.118 10:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:22.118 10:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.118 10:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:22.118 10:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.118 10:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.118 10:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.118 10:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.118 10:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.118 10:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:22.118 10:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.118 10:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.118 10:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.377 10:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.377 "name": "Existed_Raid", 00:08:22.377 "uuid": "7397e307-67cc-4bde-8e6a-0044c4d1fe31", 00:08:22.377 "strip_size_kb": 64, 00:08:22.377 "state": "configuring", 00:08:22.377 "raid_level": "concat", 00:08:22.377 "superblock": true, 00:08:22.378 "num_base_bdevs": 2, 00:08:22.378 "num_base_bdevs_discovered": 0, 00:08:22.378 "num_base_bdevs_operational": 2, 00:08:22.378 "base_bdevs_list": [ 00:08:22.378 { 00:08:22.378 "name": "BaseBdev1", 00:08:22.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.378 "is_configured": false, 00:08:22.378 "data_offset": 0, 00:08:22.378 "data_size": 0 00:08:22.378 }, 00:08:22.378 { 00:08:22.378 "name": "BaseBdev2", 00:08:22.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.378 "is_configured": false, 00:08:22.378 "data_offset": 0, 00:08:22.378 "data_size": 0 00:08:22.378 } 00:08:22.378 ] 00:08:22.378 }' 00:08:22.378 10:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.378 10:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.638 10:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:22.638 10:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.638 10:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.638 [2024-11-20 10:31:25.970446] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:08:22.638 [2024-11-20 10:31:25.970540] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:22.638 10:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.638 10:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:22.638 10:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.638 10:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.638 [2024-11-20 10:31:25.982432] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:22.638 [2024-11-20 10:31:25.982524] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:22.638 [2024-11-20 10:31:25.982559] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:22.638 [2024-11-20 10:31:25.982591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:22.638 10:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.638 10:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:22.638 10:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.638 10:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.638 [2024-11-20 10:31:26.038622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:22.638 BaseBdev1 00:08:22.638 10:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.638 10:31:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:22.638 10:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:22.638 10:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:22.638 10:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:22.638 10:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:22.638 10:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:22.638 10:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:22.638 10:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.638 10:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.638 10:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.638 10:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:22.638 10:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.638 10:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.638 [ 00:08:22.638 { 00:08:22.638 "name": "BaseBdev1", 00:08:22.638 "aliases": [ 00:08:22.638 "6534a9d3-2b71-4db1-9e39-a08cf5911a66" 00:08:22.638 ], 00:08:22.638 "product_name": "Malloc disk", 00:08:22.638 "block_size": 512, 00:08:22.638 "num_blocks": 65536, 00:08:22.638 "uuid": "6534a9d3-2b71-4db1-9e39-a08cf5911a66", 00:08:22.638 "assigned_rate_limits": { 00:08:22.638 "rw_ios_per_sec": 0, 00:08:22.638 "rw_mbytes_per_sec": 0, 00:08:22.638 "r_mbytes_per_sec": 0, 00:08:22.638 "w_mbytes_per_sec": 0 00:08:22.638 }, 00:08:22.638 "claimed": true, 
00:08:22.638 "claim_type": "exclusive_write", 00:08:22.638 "zoned": false, 00:08:22.638 "supported_io_types": { 00:08:22.638 "read": true, 00:08:22.638 "write": true, 00:08:22.638 "unmap": true, 00:08:22.638 "flush": true, 00:08:22.638 "reset": true, 00:08:22.638 "nvme_admin": false, 00:08:22.638 "nvme_io": false, 00:08:22.638 "nvme_io_md": false, 00:08:22.638 "write_zeroes": true, 00:08:22.638 "zcopy": true, 00:08:22.638 "get_zone_info": false, 00:08:22.638 "zone_management": false, 00:08:22.638 "zone_append": false, 00:08:22.638 "compare": false, 00:08:22.638 "compare_and_write": false, 00:08:22.638 "abort": true, 00:08:22.638 "seek_hole": false, 00:08:22.638 "seek_data": false, 00:08:22.638 "copy": true, 00:08:22.638 "nvme_iov_md": false 00:08:22.638 }, 00:08:22.638 "memory_domains": [ 00:08:22.638 { 00:08:22.638 "dma_device_id": "system", 00:08:22.638 "dma_device_type": 1 00:08:22.638 }, 00:08:22.638 { 00:08:22.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.638 "dma_device_type": 2 00:08:22.638 } 00:08:22.638 ], 00:08:22.638 "driver_specific": {} 00:08:22.638 } 00:08:22.638 ] 00:08:22.638 10:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.639 10:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:22.639 10:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:22.639 10:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.639 10:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.639 10:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:22.639 10:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.639 10:31:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:22.639 10:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.639 10:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.639 10:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.639 10:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.639 10:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.639 10:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.639 10:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.639 10:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.639 10:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.898 10:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.898 "name": "Existed_Raid", 00:08:22.898 "uuid": "f98b85b8-847b-4f47-a336-911d67d9c534", 00:08:22.898 "strip_size_kb": 64, 00:08:22.898 "state": "configuring", 00:08:22.898 "raid_level": "concat", 00:08:22.898 "superblock": true, 00:08:22.898 "num_base_bdevs": 2, 00:08:22.898 "num_base_bdevs_discovered": 1, 00:08:22.898 "num_base_bdevs_operational": 2, 00:08:22.898 "base_bdevs_list": [ 00:08:22.898 { 00:08:22.898 "name": "BaseBdev1", 00:08:22.898 "uuid": "6534a9d3-2b71-4db1-9e39-a08cf5911a66", 00:08:22.898 "is_configured": true, 00:08:22.898 "data_offset": 2048, 00:08:22.898 "data_size": 63488 00:08:22.898 }, 00:08:22.898 { 00:08:22.898 "name": "BaseBdev2", 00:08:22.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.898 
"is_configured": false, 00:08:22.898 "data_offset": 0, 00:08:22.898 "data_size": 0 00:08:22.898 } 00:08:22.898 ] 00:08:22.898 }' 00:08:22.898 10:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.898 10:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.158 10:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:23.158 10:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.158 10:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.158 [2024-11-20 10:31:26.541884] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:23.158 [2024-11-20 10:31:26.542023] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:23.158 10:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.158 10:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:23.158 10:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.158 10:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.158 [2024-11-20 10:31:26.549954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:23.158 [2024-11-20 10:31:26.552199] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:23.158 [2024-11-20 10:31:26.552302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:23.158 10:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.158 10:31:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:23.158 10:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:23.158 10:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:23.158 10:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.158 10:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.158 10:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:23.158 10:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.158 10:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:23.158 10:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.158 10:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.158 10:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.158 10:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.158 10:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.158 10:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.158 10:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.158 10:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.158 10:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.158 10:31:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.158 "name": "Existed_Raid", 00:08:23.158 "uuid": "75bf2094-487c-43e2-93f2-edad4969cd97", 00:08:23.158 "strip_size_kb": 64, 00:08:23.158 "state": "configuring", 00:08:23.158 "raid_level": "concat", 00:08:23.158 "superblock": true, 00:08:23.158 "num_base_bdevs": 2, 00:08:23.158 "num_base_bdevs_discovered": 1, 00:08:23.159 "num_base_bdevs_operational": 2, 00:08:23.159 "base_bdevs_list": [ 00:08:23.159 { 00:08:23.159 "name": "BaseBdev1", 00:08:23.159 "uuid": "6534a9d3-2b71-4db1-9e39-a08cf5911a66", 00:08:23.159 "is_configured": true, 00:08:23.159 "data_offset": 2048, 00:08:23.159 "data_size": 63488 00:08:23.159 }, 00:08:23.159 { 00:08:23.159 "name": "BaseBdev2", 00:08:23.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.159 "is_configured": false, 00:08:23.159 "data_offset": 0, 00:08:23.159 "data_size": 0 00:08:23.159 } 00:08:23.159 ] 00:08:23.159 }' 00:08:23.159 10:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.159 10:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.729 10:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:23.729 10:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.729 10:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.729 [2024-11-20 10:31:27.037705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:23.729 [2024-11-20 10:31:27.037994] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:23.729 [2024-11-20 10:31:27.038011] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:23.729 [2024-11-20 10:31:27.038306] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:08:23.729 BaseBdev2 00:08:23.729 [2024-11-20 10:31:27.038500] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:23.729 [2024-11-20 10:31:27.038517] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:23.729 [2024-11-20 10:31:27.038703] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:23.729 10:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.729 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:23.729 10:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:23.729 10:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:23.729 10:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:23.729 10:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:23.729 10:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:23.729 10:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:23.729 10:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.729 10:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.729 10:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.729 10:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:23.729 10:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.729 10:31:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.729 [ 00:08:23.729 { 00:08:23.729 "name": "BaseBdev2", 00:08:23.729 "aliases": [ 00:08:23.729 "85d50766-5acf-4353-96a1-9facf2ba5dc6" 00:08:23.729 ], 00:08:23.729 "product_name": "Malloc disk", 00:08:23.729 "block_size": 512, 00:08:23.729 "num_blocks": 65536, 00:08:23.729 "uuid": "85d50766-5acf-4353-96a1-9facf2ba5dc6", 00:08:23.729 "assigned_rate_limits": { 00:08:23.729 "rw_ios_per_sec": 0, 00:08:23.729 "rw_mbytes_per_sec": 0, 00:08:23.729 "r_mbytes_per_sec": 0, 00:08:23.729 "w_mbytes_per_sec": 0 00:08:23.729 }, 00:08:23.729 "claimed": true, 00:08:23.729 "claim_type": "exclusive_write", 00:08:23.729 "zoned": false, 00:08:23.729 "supported_io_types": { 00:08:23.729 "read": true, 00:08:23.729 "write": true, 00:08:23.729 "unmap": true, 00:08:23.729 "flush": true, 00:08:23.729 "reset": true, 00:08:23.729 "nvme_admin": false, 00:08:23.729 "nvme_io": false, 00:08:23.729 "nvme_io_md": false, 00:08:23.729 "write_zeroes": true, 00:08:23.729 "zcopy": true, 00:08:23.729 "get_zone_info": false, 00:08:23.729 "zone_management": false, 00:08:23.729 "zone_append": false, 00:08:23.729 "compare": false, 00:08:23.729 "compare_and_write": false, 00:08:23.729 "abort": true, 00:08:23.729 "seek_hole": false, 00:08:23.729 "seek_data": false, 00:08:23.729 "copy": true, 00:08:23.729 "nvme_iov_md": false 00:08:23.729 }, 00:08:23.729 "memory_domains": [ 00:08:23.729 { 00:08:23.729 "dma_device_id": "system", 00:08:23.729 "dma_device_type": 1 00:08:23.729 }, 00:08:23.729 { 00:08:23.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.729 "dma_device_type": 2 00:08:23.729 } 00:08:23.729 ], 00:08:23.729 "driver_specific": {} 00:08:23.729 } 00:08:23.729 ] 00:08:23.729 10:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.729 10:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:23.729 10:31:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:23.729 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:23.729 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:23.729 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.729 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:23.729 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:23.729 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.729 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:23.729 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.729 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.729 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.729 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.729 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.729 10:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.729 10:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.729 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.729 10:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.729 10:31:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.729 "name": "Existed_Raid", 00:08:23.729 "uuid": "75bf2094-487c-43e2-93f2-edad4969cd97", 00:08:23.729 "strip_size_kb": 64, 00:08:23.729 "state": "online", 00:08:23.729 "raid_level": "concat", 00:08:23.729 "superblock": true, 00:08:23.729 "num_base_bdevs": 2, 00:08:23.729 "num_base_bdevs_discovered": 2, 00:08:23.729 "num_base_bdevs_operational": 2, 00:08:23.729 "base_bdevs_list": [ 00:08:23.729 { 00:08:23.729 "name": "BaseBdev1", 00:08:23.729 "uuid": "6534a9d3-2b71-4db1-9e39-a08cf5911a66", 00:08:23.729 "is_configured": true, 00:08:23.729 "data_offset": 2048, 00:08:23.729 "data_size": 63488 00:08:23.729 }, 00:08:23.729 { 00:08:23.729 "name": "BaseBdev2", 00:08:23.729 "uuid": "85d50766-5acf-4353-96a1-9facf2ba5dc6", 00:08:23.729 "is_configured": true, 00:08:23.729 "data_offset": 2048, 00:08:23.729 "data_size": 63488 00:08:23.729 } 00:08:23.729 ] 00:08:23.729 }' 00:08:23.729 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.730 10:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.298 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:24.298 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:24.298 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:24.298 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:24.298 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:24.298 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:24.298 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:08:24.298 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:24.298 10:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.298 10:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.298 [2024-11-20 10:31:27.601164] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:24.298 10:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.298 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:24.298 "name": "Existed_Raid", 00:08:24.298 "aliases": [ 00:08:24.298 "75bf2094-487c-43e2-93f2-edad4969cd97" 00:08:24.298 ], 00:08:24.298 "product_name": "Raid Volume", 00:08:24.298 "block_size": 512, 00:08:24.298 "num_blocks": 126976, 00:08:24.298 "uuid": "75bf2094-487c-43e2-93f2-edad4969cd97", 00:08:24.298 "assigned_rate_limits": { 00:08:24.298 "rw_ios_per_sec": 0, 00:08:24.298 "rw_mbytes_per_sec": 0, 00:08:24.298 "r_mbytes_per_sec": 0, 00:08:24.298 "w_mbytes_per_sec": 0 00:08:24.298 }, 00:08:24.298 "claimed": false, 00:08:24.298 "zoned": false, 00:08:24.298 "supported_io_types": { 00:08:24.298 "read": true, 00:08:24.298 "write": true, 00:08:24.298 "unmap": true, 00:08:24.298 "flush": true, 00:08:24.298 "reset": true, 00:08:24.298 "nvme_admin": false, 00:08:24.298 "nvme_io": false, 00:08:24.298 "nvme_io_md": false, 00:08:24.298 "write_zeroes": true, 00:08:24.298 "zcopy": false, 00:08:24.298 "get_zone_info": false, 00:08:24.298 "zone_management": false, 00:08:24.298 "zone_append": false, 00:08:24.298 "compare": false, 00:08:24.298 "compare_and_write": false, 00:08:24.298 "abort": false, 00:08:24.298 "seek_hole": false, 00:08:24.298 "seek_data": false, 00:08:24.298 "copy": false, 00:08:24.298 "nvme_iov_md": false 00:08:24.298 }, 00:08:24.298 "memory_domains": [ 00:08:24.298 { 00:08:24.298 
"dma_device_id": "system", 00:08:24.298 "dma_device_type": 1 00:08:24.298 }, 00:08:24.298 { 00:08:24.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.298 "dma_device_type": 2 00:08:24.298 }, 00:08:24.298 { 00:08:24.298 "dma_device_id": "system", 00:08:24.298 "dma_device_type": 1 00:08:24.298 }, 00:08:24.298 { 00:08:24.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.298 "dma_device_type": 2 00:08:24.298 } 00:08:24.298 ], 00:08:24.298 "driver_specific": { 00:08:24.298 "raid": { 00:08:24.298 "uuid": "75bf2094-487c-43e2-93f2-edad4969cd97", 00:08:24.299 "strip_size_kb": 64, 00:08:24.299 "state": "online", 00:08:24.299 "raid_level": "concat", 00:08:24.299 "superblock": true, 00:08:24.299 "num_base_bdevs": 2, 00:08:24.299 "num_base_bdevs_discovered": 2, 00:08:24.299 "num_base_bdevs_operational": 2, 00:08:24.299 "base_bdevs_list": [ 00:08:24.299 { 00:08:24.299 "name": "BaseBdev1", 00:08:24.299 "uuid": "6534a9d3-2b71-4db1-9e39-a08cf5911a66", 00:08:24.299 "is_configured": true, 00:08:24.299 "data_offset": 2048, 00:08:24.299 "data_size": 63488 00:08:24.299 }, 00:08:24.299 { 00:08:24.299 "name": "BaseBdev2", 00:08:24.299 "uuid": "85d50766-5acf-4353-96a1-9facf2ba5dc6", 00:08:24.299 "is_configured": true, 00:08:24.299 "data_offset": 2048, 00:08:24.299 "data_size": 63488 00:08:24.299 } 00:08:24.299 ] 00:08:24.299 } 00:08:24.299 } 00:08:24.299 }' 00:08:24.299 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:24.299 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:24.299 BaseBdev2' 00:08:24.299 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.299 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:24.299 10:31:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:24.299 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:24.299 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.299 10:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.299 10:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.299 10:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.299 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:24.299 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:24.299 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:24.299 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:24.299 10:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.299 10:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.299 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.558 10:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.558 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:24.558 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:24.558 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:08:24.558 10:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.558 10:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.558 [2024-11-20 10:31:27.808545] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:24.558 [2024-11-20 10:31:27.808636] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:24.558 [2024-11-20 10:31:27.808699] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:24.558 10:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.558 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:24.558 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:24.558 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:24.558 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:24.558 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:24.558 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:24.558 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.558 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:24.558 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:24.558 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.558 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:08:24.558 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.558 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.558 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.558 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.558 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.558 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.558 10:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.558 10:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.558 10:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.558 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.558 "name": "Existed_Raid", 00:08:24.558 "uuid": "75bf2094-487c-43e2-93f2-edad4969cd97", 00:08:24.558 "strip_size_kb": 64, 00:08:24.558 "state": "offline", 00:08:24.558 "raid_level": "concat", 00:08:24.558 "superblock": true, 00:08:24.558 "num_base_bdevs": 2, 00:08:24.558 "num_base_bdevs_discovered": 1, 00:08:24.558 "num_base_bdevs_operational": 1, 00:08:24.558 "base_bdevs_list": [ 00:08:24.558 { 00:08:24.558 "name": null, 00:08:24.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.558 "is_configured": false, 00:08:24.558 "data_offset": 0, 00:08:24.558 "data_size": 63488 00:08:24.558 }, 00:08:24.558 { 00:08:24.558 "name": "BaseBdev2", 00:08:24.558 "uuid": "85d50766-5acf-4353-96a1-9facf2ba5dc6", 00:08:24.558 "is_configured": true, 00:08:24.558 "data_offset": 2048, 00:08:24.558 "data_size": 63488 00:08:24.558 } 00:08:24.558 ] 
00:08:24.558 }' 00:08:24.558 10:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.558 10:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.128 10:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:25.128 10:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:25.128 10:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.128 10:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.128 10:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.128 10:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:25.128 10:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.128 10:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:25.128 10:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:25.128 10:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:25.128 10:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.128 10:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.128 [2024-11-20 10:31:28.443137] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:25.128 [2024-11-20 10:31:28.443280] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:25.128 10:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.128 10:31:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:25.128 10:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:25.128 10:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:25.128 10:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.128 10:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.128 10:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.128 10:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.399 10:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:25.399 10:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:25.399 10:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:25.399 10:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62103 00:08:25.399 10:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62103 ']' 00:08:25.399 10:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62103 00:08:25.399 10:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:25.399 10:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:25.399 10:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62103 00:08:25.399 10:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:25.399 10:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:08:25.399 10:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62103' 00:08:25.399 killing process with pid 62103 00:08:25.399 10:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62103 00:08:25.399 [2024-11-20 10:31:28.638027] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:25.399 10:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62103 00:08:25.399 [2024-11-20 10:31:28.658860] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:26.777 10:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:26.777 00:08:26.777 real 0m5.440s 00:08:26.777 user 0m7.826s 00:08:26.777 sys 0m0.857s 00:08:26.777 10:31:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.777 10:31:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.777 ************************************ 00:08:26.777 END TEST raid_state_function_test_sb 00:08:26.777 ************************************ 00:08:26.777 10:31:29 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:08:26.777 10:31:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:26.777 10:31:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.777 10:31:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:26.777 ************************************ 00:08:26.777 START TEST raid_superblock_test 00:08:26.777 ************************************ 00:08:26.777 10:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:08:26.777 10:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:26.777 10:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 
-- # local num_base_bdevs=2 00:08:26.777 10:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:26.777 10:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:26.777 10:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:26.777 10:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:26.777 10:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:26.777 10:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:26.778 10:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:26.778 10:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:26.778 10:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:26.778 10:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:26.778 10:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:26.778 10:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:26.778 10:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:26.778 10:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:26.778 10:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62355 00:08:26.778 10:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:26.778 10:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62355 00:08:26.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:26.778 10:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62355 ']' 00:08:26.778 10:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.778 10:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:26.778 10:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.778 10:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:26.778 10:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.778 [2024-11-20 10:31:30.112341] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:08:26.778 [2024-11-20 10:31:30.112492] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62355 ] 00:08:27.038 [2024-11-20 10:31:30.289945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.038 [2024-11-20 10:31:30.421985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.298 [2024-11-20 10:31:30.652353] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:27.298 [2024-11-20 10:31:30.652528] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:27.557 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:27.557 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:27.557 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:27.557 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= 
num_base_bdevs )) 00:08:27.557 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:27.557 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:27.557 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:27.557 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:27.557 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:27.557 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:27.557 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:27.557 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.557 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.816 malloc1 00:08:27.816 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.816 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:27.816 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.816 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.816 [2024-11-20 10:31:31.085905] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:27.816 [2024-11-20 10:31:31.085982] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:27.816 [2024-11-20 10:31:31.086010] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:27.816 [2024-11-20 10:31:31.086022] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:08:27.816 [2024-11-20 10:31:31.088545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:27.816 [2024-11-20 10:31:31.088590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:27.816 pt1 00:08:27.816 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.816 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:27.816 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:27.816 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:27.816 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:27.816 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:27.816 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:27.817 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:27.817 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:27.817 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:27.817 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.817 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.817 malloc2 00:08:27.817 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.817 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:27.817 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:27.817 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.817 [2024-11-20 10:31:31.148737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:27.817 [2024-11-20 10:31:31.148859] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:27.817 [2024-11-20 10:31:31.148907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:27.817 [2024-11-20 10:31:31.148943] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:27.817 [2024-11-20 10:31:31.151402] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:27.817 [2024-11-20 10:31:31.151483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:27.817 pt2 00:08:27.817 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.817 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:27.817 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:27.817 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:27.817 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.817 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.817 [2024-11-20 10:31:31.160803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:27.817 [2024-11-20 10:31:31.162985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:27.817 [2024-11-20 10:31:31.163233] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:27.817 [2024-11-20 10:31:31.163288] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 
126976, blocklen 512 00:08:27.817 [2024-11-20 10:31:31.163673] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:27.817 [2024-11-20 10:31:31.163913] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:27.817 [2024-11-20 10:31:31.163969] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:27.817 [2024-11-20 10:31:31.164214] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:27.817 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.817 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:27.817 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:27.817 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:27.817 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:27.817 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.817 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:27.817 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.817 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.817 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.817 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.817 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.817 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.817 10:31:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:27.817 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.817 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.817 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.817 "name": "raid_bdev1", 00:08:27.817 "uuid": "6bd9a0b2-d7e6-48d6-9bbc-3b4b0a636bf7", 00:08:27.817 "strip_size_kb": 64, 00:08:27.817 "state": "online", 00:08:27.817 "raid_level": "concat", 00:08:27.817 "superblock": true, 00:08:27.817 "num_base_bdevs": 2, 00:08:27.817 "num_base_bdevs_discovered": 2, 00:08:27.817 "num_base_bdevs_operational": 2, 00:08:27.817 "base_bdevs_list": [ 00:08:27.817 { 00:08:27.817 "name": "pt1", 00:08:27.817 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:27.817 "is_configured": true, 00:08:27.817 "data_offset": 2048, 00:08:27.817 "data_size": 63488 00:08:27.817 }, 00:08:27.817 { 00:08:27.817 "name": "pt2", 00:08:27.817 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:27.817 "is_configured": true, 00:08:27.817 "data_offset": 2048, 00:08:27.817 "data_size": 63488 00:08:27.817 } 00:08:27.817 ] 00:08:27.817 }' 00:08:27.817 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.817 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.428 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:28.428 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:28.428 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:28.428 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:28.428 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # 
local name 00:08:28.428 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:28.428 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:28.428 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.428 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:28.428 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.428 [2024-11-20 10:31:31.640350] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:28.428 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.428 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:28.428 "name": "raid_bdev1", 00:08:28.428 "aliases": [ 00:08:28.428 "6bd9a0b2-d7e6-48d6-9bbc-3b4b0a636bf7" 00:08:28.428 ], 00:08:28.428 "product_name": "Raid Volume", 00:08:28.428 "block_size": 512, 00:08:28.428 "num_blocks": 126976, 00:08:28.428 "uuid": "6bd9a0b2-d7e6-48d6-9bbc-3b4b0a636bf7", 00:08:28.428 "assigned_rate_limits": { 00:08:28.428 "rw_ios_per_sec": 0, 00:08:28.428 "rw_mbytes_per_sec": 0, 00:08:28.428 "r_mbytes_per_sec": 0, 00:08:28.428 "w_mbytes_per_sec": 0 00:08:28.428 }, 00:08:28.428 "claimed": false, 00:08:28.428 "zoned": false, 00:08:28.428 "supported_io_types": { 00:08:28.428 "read": true, 00:08:28.428 "write": true, 00:08:28.428 "unmap": true, 00:08:28.428 "flush": true, 00:08:28.428 "reset": true, 00:08:28.428 "nvme_admin": false, 00:08:28.428 "nvme_io": false, 00:08:28.428 "nvme_io_md": false, 00:08:28.428 "write_zeroes": true, 00:08:28.428 "zcopy": false, 00:08:28.428 "get_zone_info": false, 00:08:28.428 "zone_management": false, 00:08:28.428 "zone_append": false, 00:08:28.428 "compare": false, 00:08:28.428 "compare_and_write": false, 00:08:28.428 "abort": false, 00:08:28.428 
"seek_hole": false, 00:08:28.428 "seek_data": false, 00:08:28.428 "copy": false, 00:08:28.428 "nvme_iov_md": false 00:08:28.428 }, 00:08:28.428 "memory_domains": [ 00:08:28.428 { 00:08:28.428 "dma_device_id": "system", 00:08:28.428 "dma_device_type": 1 00:08:28.428 }, 00:08:28.428 { 00:08:28.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.428 "dma_device_type": 2 00:08:28.428 }, 00:08:28.428 { 00:08:28.428 "dma_device_id": "system", 00:08:28.428 "dma_device_type": 1 00:08:28.428 }, 00:08:28.428 { 00:08:28.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.428 "dma_device_type": 2 00:08:28.428 } 00:08:28.428 ], 00:08:28.428 "driver_specific": { 00:08:28.428 "raid": { 00:08:28.428 "uuid": "6bd9a0b2-d7e6-48d6-9bbc-3b4b0a636bf7", 00:08:28.428 "strip_size_kb": 64, 00:08:28.428 "state": "online", 00:08:28.428 "raid_level": "concat", 00:08:28.428 "superblock": true, 00:08:28.428 "num_base_bdevs": 2, 00:08:28.428 "num_base_bdevs_discovered": 2, 00:08:28.428 "num_base_bdevs_operational": 2, 00:08:28.428 "base_bdevs_list": [ 00:08:28.428 { 00:08:28.428 "name": "pt1", 00:08:28.428 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:28.428 "is_configured": true, 00:08:28.428 "data_offset": 2048, 00:08:28.428 "data_size": 63488 00:08:28.428 }, 00:08:28.428 { 00:08:28.428 "name": "pt2", 00:08:28.428 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:28.428 "is_configured": true, 00:08:28.428 "data_offset": 2048, 00:08:28.428 "data_size": 63488 00:08:28.428 } 00:08:28.428 ] 00:08:28.428 } 00:08:28.428 } 00:08:28.428 }' 00:08:28.428 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:28.428 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:28.428 pt2' 00:08:28.429 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:08:28.429 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:28.429 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.429 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:28.429 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.429 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.429 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.429 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.429 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.429 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.429 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.429 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.429 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:28.429 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.429 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.429 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.429 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.429 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.429 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:28.429 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.429 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.429 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:28.429 [2024-11-20 10:31:31.860138] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:28.429 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.429 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6bd9a0b2-d7e6-48d6-9bbc-3b4b0a636bf7 00:08:28.688 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6bd9a0b2-d7e6-48d6-9bbc-3b4b0a636bf7 ']' 00:08:28.688 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:28.688 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.688 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.688 [2024-11-20 10:31:31.911806] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:28.688 [2024-11-20 10:31:31.911885] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:28.688 [2024-11-20 10:31:31.912023] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:28.688 [2024-11-20 10:31:31.912110] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:28.688 [2024-11-20 10:31:31.912166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:28.688 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.688 10:31:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.688 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:28.688 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.688 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.688 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.688 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:28.688 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:28.688 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:28.688 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:28.688 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.688 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.688 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.688 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:28.688 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:28.688 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.688 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.688 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.688 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:28.688 10:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:28.688 10:31:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.688 10:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.688 [2024-11-20 10:31:32.051593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:28.688 [2024-11-20 10:31:32.053811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:28.688 [2024-11-20 10:31:32.053937] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:28.688 [2024-11-20 10:31:32.054035] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:28.688 [2024-11-20 10:31:32.054056] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:28.688 [2024-11-20 10:31:32.054068] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:28.688 request: 00:08:28.688 { 00:08:28.688 "name": "raid_bdev1", 00:08:28.688 "raid_level": "concat", 00:08:28.688 "base_bdevs": [ 00:08:28.688 "malloc1", 00:08:28.688 "malloc2" 00:08:28.688 ], 00:08:28.688 "strip_size_kb": 64, 00:08:28.688 "superblock": false, 00:08:28.688 "method": "bdev_raid_create", 00:08:28.688 "req_id": 1 00:08:28.688 } 00:08:28.688 Got JSON-RPC error response 00:08:28.688 response: 00:08:28.688 { 00:08:28.688 "code": -17, 00:08:28.688 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:28.688 } 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.688 10:31:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.688 [2024-11-20 10:31:32.115520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:28.688 [2024-11-20 10:31:32.115646] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:28.688 [2024-11-20 10:31:32.115701] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:28.688 [2024-11-20 10:31:32.115737] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:28.688 [2024-11-20 10:31:32.118279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:28.688 [2024-11-20 10:31:32.118377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:28.688 [2024-11-20 10:31:32.118512] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:28.688 [2024-11-20 10:31:32.118622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:28.688 pt1 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:08:28.688 10:31:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.688 10:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.948 10:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.948 "name": "raid_bdev1", 00:08:28.948 "uuid": "6bd9a0b2-d7e6-48d6-9bbc-3b4b0a636bf7", 00:08:28.948 "strip_size_kb": 64, 00:08:28.948 "state": "configuring", 00:08:28.948 "raid_level": "concat", 00:08:28.948 "superblock": true, 00:08:28.948 "num_base_bdevs": 2, 00:08:28.948 "num_base_bdevs_discovered": 1, 00:08:28.948 "num_base_bdevs_operational": 2, 00:08:28.948 "base_bdevs_list": [ 
00:08:28.948 { 00:08:28.948 "name": "pt1", 00:08:28.948 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:28.948 "is_configured": true, 00:08:28.948 "data_offset": 2048, 00:08:28.948 "data_size": 63488 00:08:28.948 }, 00:08:28.948 { 00:08:28.948 "name": null, 00:08:28.948 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:28.948 "is_configured": false, 00:08:28.948 "data_offset": 2048, 00:08:28.948 "data_size": 63488 00:08:28.948 } 00:08:28.948 ] 00:08:28.948 }' 00:08:28.948 10:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.948 10:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.206 10:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:29.206 10:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:29.206 10:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:29.206 10:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:29.206 10:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.206 10:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.206 [2024-11-20 10:31:32.578802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:29.206 [2024-11-20 10:31:32.578886] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.206 [2024-11-20 10:31:32.578910] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:29.206 [2024-11-20 10:31:32.578922] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.206 [2024-11-20 10:31:32.579423] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.206 [2024-11-20 10:31:32.579446] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:29.206 [2024-11-20 10:31:32.579540] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:29.206 [2024-11-20 10:31:32.579573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:29.206 [2024-11-20 10:31:32.579749] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:29.206 [2024-11-20 10:31:32.579763] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:29.206 [2024-11-20 10:31:32.580033] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:29.206 [2024-11-20 10:31:32.580209] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:29.206 [2024-11-20 10:31:32.580221] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:29.206 [2024-11-20 10:31:32.580414] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:29.206 pt2 00:08:29.206 10:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.206 10:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:29.206 10:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:29.206 10:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:29.206 10:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:29.206 10:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:29.206 10:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:29.206 10:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:08:29.206 10:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:29.206 10:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.206 10:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.206 10:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.206 10:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.206 10:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.206 10:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.206 10:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.206 10:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:29.206 10:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.206 10:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.206 "name": "raid_bdev1", 00:08:29.206 "uuid": "6bd9a0b2-d7e6-48d6-9bbc-3b4b0a636bf7", 00:08:29.206 "strip_size_kb": 64, 00:08:29.206 "state": "online", 00:08:29.206 "raid_level": "concat", 00:08:29.206 "superblock": true, 00:08:29.206 "num_base_bdevs": 2, 00:08:29.206 "num_base_bdevs_discovered": 2, 00:08:29.206 "num_base_bdevs_operational": 2, 00:08:29.206 "base_bdevs_list": [ 00:08:29.206 { 00:08:29.206 "name": "pt1", 00:08:29.206 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:29.206 "is_configured": true, 00:08:29.206 "data_offset": 2048, 00:08:29.206 "data_size": 63488 00:08:29.206 }, 00:08:29.206 { 00:08:29.206 "name": "pt2", 00:08:29.206 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:29.206 "is_configured": true, 00:08:29.206 "data_offset": 2048, 00:08:29.206 "data_size": 
63488 00:08:29.206 } 00:08:29.206 ] 00:08:29.206 }' 00:08:29.206 10:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.206 10:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.773 10:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:29.773 10:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:29.773 10:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:29.773 10:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:29.773 10:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:29.773 10:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:29.773 10:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:29.773 10:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:29.774 10:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.774 10:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.774 [2024-11-20 10:31:33.062268] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:29.774 10:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.774 10:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:29.774 "name": "raid_bdev1", 00:08:29.774 "aliases": [ 00:08:29.774 "6bd9a0b2-d7e6-48d6-9bbc-3b4b0a636bf7" 00:08:29.774 ], 00:08:29.774 "product_name": "Raid Volume", 00:08:29.774 "block_size": 512, 00:08:29.774 "num_blocks": 126976, 00:08:29.774 "uuid": "6bd9a0b2-d7e6-48d6-9bbc-3b4b0a636bf7", 00:08:29.774 "assigned_rate_limits": { 00:08:29.774 
"rw_ios_per_sec": 0, 00:08:29.774 "rw_mbytes_per_sec": 0, 00:08:29.774 "r_mbytes_per_sec": 0, 00:08:29.774 "w_mbytes_per_sec": 0 00:08:29.774 }, 00:08:29.774 "claimed": false, 00:08:29.774 "zoned": false, 00:08:29.774 "supported_io_types": { 00:08:29.774 "read": true, 00:08:29.774 "write": true, 00:08:29.774 "unmap": true, 00:08:29.774 "flush": true, 00:08:29.774 "reset": true, 00:08:29.774 "nvme_admin": false, 00:08:29.774 "nvme_io": false, 00:08:29.774 "nvme_io_md": false, 00:08:29.774 "write_zeroes": true, 00:08:29.774 "zcopy": false, 00:08:29.774 "get_zone_info": false, 00:08:29.774 "zone_management": false, 00:08:29.774 "zone_append": false, 00:08:29.774 "compare": false, 00:08:29.774 "compare_and_write": false, 00:08:29.774 "abort": false, 00:08:29.774 "seek_hole": false, 00:08:29.774 "seek_data": false, 00:08:29.774 "copy": false, 00:08:29.774 "nvme_iov_md": false 00:08:29.774 }, 00:08:29.774 "memory_domains": [ 00:08:29.774 { 00:08:29.774 "dma_device_id": "system", 00:08:29.774 "dma_device_type": 1 00:08:29.774 }, 00:08:29.774 { 00:08:29.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.774 "dma_device_type": 2 00:08:29.774 }, 00:08:29.774 { 00:08:29.774 "dma_device_id": "system", 00:08:29.774 "dma_device_type": 1 00:08:29.774 }, 00:08:29.774 { 00:08:29.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.774 "dma_device_type": 2 00:08:29.774 } 00:08:29.774 ], 00:08:29.774 "driver_specific": { 00:08:29.774 "raid": { 00:08:29.774 "uuid": "6bd9a0b2-d7e6-48d6-9bbc-3b4b0a636bf7", 00:08:29.774 "strip_size_kb": 64, 00:08:29.774 "state": "online", 00:08:29.774 "raid_level": "concat", 00:08:29.774 "superblock": true, 00:08:29.774 "num_base_bdevs": 2, 00:08:29.774 "num_base_bdevs_discovered": 2, 00:08:29.774 "num_base_bdevs_operational": 2, 00:08:29.774 "base_bdevs_list": [ 00:08:29.774 { 00:08:29.774 "name": "pt1", 00:08:29.774 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:29.774 "is_configured": true, 00:08:29.774 "data_offset": 2048, 00:08:29.774 
"data_size": 63488 00:08:29.774 }, 00:08:29.774 { 00:08:29.774 "name": "pt2", 00:08:29.774 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:29.774 "is_configured": true, 00:08:29.774 "data_offset": 2048, 00:08:29.774 "data_size": 63488 00:08:29.774 } 00:08:29.774 ] 00:08:29.774 } 00:08:29.774 } 00:08:29.774 }' 00:08:29.774 10:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:29.774 10:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:29.774 pt2' 00:08:29.774 10:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.774 10:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:29.774 10:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:29.774 10:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.774 10:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:29.774 10:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.774 10:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.774 10:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.774 10:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:29.774 10:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:29.774 10:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:29.774 10:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:08:29.774 10:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.774 10:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.774 10:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.774 10:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.033 10:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.033 10:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.033 10:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:30.033 10:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:30.033 10:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.033 10:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.033 [2024-11-20 10:31:33.301837] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:30.033 10:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.033 10:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6bd9a0b2-d7e6-48d6-9bbc-3b4b0a636bf7 '!=' 6bd9a0b2-d7e6-48d6-9bbc-3b4b0a636bf7 ']' 00:08:30.033 10:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:30.033 10:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:30.033 10:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:30.033 10:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62355 00:08:30.033 10:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62355 ']' 
00:08:30.033 10:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62355 00:08:30.033 10:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:30.033 10:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:30.033 10:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62355 00:08:30.033 killing process with pid 62355 00:08:30.033 10:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:30.033 10:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:30.033 10:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62355' 00:08:30.033 10:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62355 00:08:30.033 [2024-11-20 10:31:33.387235] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:30.033 [2024-11-20 10:31:33.387349] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:30.033 [2024-11-20 10:31:33.387422] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:30.033 [2024-11-20 10:31:33.387436] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:30.033 10:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62355 00:08:30.291 [2024-11-20 10:31:33.629407] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:31.670 10:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:31.670 00:08:31.670 real 0m4.855s 00:08:31.670 user 0m6.825s 00:08:31.670 sys 0m0.783s 00:08:31.670 10:31:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.670 
************************************ 00:08:31.670 END TEST raid_superblock_test 00:08:31.670 ************************************ 00:08:31.670 10:31:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.670 10:31:34 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:08:31.670 10:31:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:31.670 10:31:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.670 10:31:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:31.670 ************************************ 00:08:31.670 START TEST raid_read_error_test 00:08:31.670 ************************************ 00:08:31.670 10:31:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:08:31.670 10:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:31.670 10:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:31.670 10:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:31.670 10:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:31.670 10:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:31.670 10:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:31.670 10:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:31.670 10:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:31.670 10:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:31.670 10:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:31.670 10:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:08:31.670 10:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:31.670 10:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:31.670 10:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:31.670 10:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:31.670 10:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:31.670 10:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:31.670 10:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:31.670 10:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:31.670 10:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:31.670 10:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:31.670 10:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:31.670 10:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.17PasyFitm 00:08:31.670 10:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62572 00:08:31.670 10:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:31.670 10:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62572 00:08:31.670 10:31:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62572 ']' 00:08:31.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:31.670 10:31:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.670 10:31:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:31.670 10:31:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.670 10:31:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:31.670 10:31:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.670 [2024-11-20 10:31:35.052795] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:08:31.670 [2024-11-20 10:31:35.052938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62572 ] 00:08:31.930 [2024-11-20 10:31:35.230531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.930 [2024-11-20 10:31:35.358551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.189 [2024-11-20 10:31:35.591329] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.189 [2024-11-20 10:31:35.591385] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.758 10:31:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:32.758 10:31:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:32.758 10:31:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:32.758 10:31:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:32.758 10:31:35 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.758 10:31:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.758 BaseBdev1_malloc 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.758 true 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.758 [2024-11-20 10:31:36.063246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:32.758 [2024-11-20 10:31:36.063308] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.758 [2024-11-20 10:31:36.063348] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:32.758 [2024-11-20 10:31:36.063360] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.758 [2024-11-20 10:31:36.065696] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.758 [2024-11-20 10:31:36.065739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:32.758 BaseBdev1 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.758 
10:31:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.758 BaseBdev2_malloc 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.758 true 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.758 [2024-11-20 10:31:36.137129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:32.758 [2024-11-20 10:31:36.137196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.758 [2024-11-20 10:31:36.137236] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:32.758 [2024-11-20 10:31:36.137249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.758 [2024-11-20 10:31:36.139777] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:08:32.758 [2024-11-20 10:31:36.139826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:32.758 BaseBdev2 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.758 [2024-11-20 10:31:36.149207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:32.758 [2024-11-20 10:31:36.151431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:32.758 [2024-11-20 10:31:36.151718] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:32.758 [2024-11-20 10:31:36.151740] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:32.758 [2024-11-20 10:31:36.152059] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:32.758 [2024-11-20 10:31:36.152273] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:32.758 [2024-11-20 10:31:36.152287] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:32.758 [2024-11-20 10:31:36.152515] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.758 10:31:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.758 "name": "raid_bdev1", 00:08:32.758 "uuid": "33d78b87-92e4-4138-93f8-e535e29b497b", 00:08:32.758 "strip_size_kb": 64, 00:08:32.758 "state": "online", 00:08:32.758 "raid_level": "concat", 00:08:32.758 "superblock": true, 00:08:32.758 "num_base_bdevs": 2, 00:08:32.758 "num_base_bdevs_discovered": 2, 00:08:32.758 "num_base_bdevs_operational": 2, 00:08:32.758 "base_bdevs_list": [ 00:08:32.758 { 00:08:32.758 "name": "BaseBdev1", 00:08:32.758 "uuid": 
"0736dfbe-7e3f-5a2b-a881-5464295b5d19", 00:08:32.758 "is_configured": true, 00:08:32.758 "data_offset": 2048, 00:08:32.758 "data_size": 63488 00:08:32.758 }, 00:08:32.758 { 00:08:32.758 "name": "BaseBdev2", 00:08:32.758 "uuid": "6fdc6b38-798e-5095-8850-b37acf5e7ea9", 00:08:32.759 "is_configured": true, 00:08:32.759 "data_offset": 2048, 00:08:32.759 "data_size": 63488 00:08:32.759 } 00:08:32.759 ] 00:08:32.759 }' 00:08:32.759 10:31:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.759 10:31:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.326 10:31:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:33.326 10:31:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:33.326 [2024-11-20 10:31:36.773686] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:34.261 10:31:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:34.261 10:31:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.261 10:31:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.261 10:31:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.261 10:31:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:34.261 10:31:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:34.261 10:31:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:34.261 10:31:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:34.261 10:31:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:08:34.262 10:31:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:34.262 10:31:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:34.262 10:31:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.262 10:31:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:34.262 10:31:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.262 10:31:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.262 10:31:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.262 10:31:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.262 10:31:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.262 10:31:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.262 10:31:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.262 10:31:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:34.262 10:31:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.262 10:31:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.262 "name": "raid_bdev1", 00:08:34.262 "uuid": "33d78b87-92e4-4138-93f8-e535e29b497b", 00:08:34.262 "strip_size_kb": 64, 00:08:34.262 "state": "online", 00:08:34.262 "raid_level": "concat", 00:08:34.262 "superblock": true, 00:08:34.262 "num_base_bdevs": 2, 00:08:34.262 "num_base_bdevs_discovered": 2, 00:08:34.262 "num_base_bdevs_operational": 2, 00:08:34.262 "base_bdevs_list": [ 00:08:34.262 { 00:08:34.262 "name": "BaseBdev1", 00:08:34.262 "uuid": 
"0736dfbe-7e3f-5a2b-a881-5464295b5d19", 00:08:34.262 "is_configured": true, 00:08:34.262 "data_offset": 2048, 00:08:34.262 "data_size": 63488 00:08:34.262 }, 00:08:34.262 { 00:08:34.262 "name": "BaseBdev2", 00:08:34.262 "uuid": "6fdc6b38-798e-5095-8850-b37acf5e7ea9", 00:08:34.262 "is_configured": true, 00:08:34.262 "data_offset": 2048, 00:08:34.262 "data_size": 63488 00:08:34.262 } 00:08:34.262 ] 00:08:34.262 }' 00:08:34.262 10:31:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.262 10:31:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.830 10:31:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:34.830 10:31:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.830 10:31:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.830 [2024-11-20 10:31:38.106018] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:34.830 [2024-11-20 10:31:38.106056] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:34.830 [2024-11-20 10:31:38.109152] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:34.830 [2024-11-20 10:31:38.109242] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:34.830 [2024-11-20 10:31:38.109297] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:34.830 [2024-11-20 10:31:38.109349] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:34.830 { 00:08:34.830 "results": [ 00:08:34.830 { 00:08:34.830 "job": "raid_bdev1", 00:08:34.830 "core_mask": "0x1", 00:08:34.830 "workload": "randrw", 00:08:34.830 "percentage": 50, 00:08:34.830 "status": "finished", 00:08:34.830 "queue_depth": 1, 00:08:34.830 "io_size": 
131072, 00:08:34.830 "runtime": 1.3328, 00:08:34.830 "iops": 14722.388955582233, 00:08:34.830 "mibps": 1840.2986194477792, 00:08:34.830 "io_failed": 1, 00:08:34.830 "io_timeout": 0, 00:08:34.830 "avg_latency_us": 94.10150169115781, 00:08:34.830 "min_latency_us": 28.17117903930131, 00:08:34.831 "max_latency_us": 1523.926637554585 00:08:34.831 } 00:08:34.831 ], 00:08:34.831 "core_count": 1 00:08:34.831 } 00:08:34.831 10:31:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.831 10:31:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62572 00:08:34.831 10:31:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62572 ']' 00:08:34.831 10:31:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62572 00:08:34.831 10:31:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:34.831 10:31:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:34.831 10:31:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62572 00:08:34.831 10:31:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:34.831 10:31:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:34.831 10:31:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62572' 00:08:34.831 killing process with pid 62572 00:08:34.831 10:31:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62572 00:08:34.831 [2024-11-20 10:31:38.149720] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:34.831 10:31:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62572 00:08:35.090 [2024-11-20 10:31:38.313087] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:36.469 10:31:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:36.469 10:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.17PasyFitm 00:08:36.469 10:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:36.469 10:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:08:36.469 10:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:36.469 ************************************ 00:08:36.469 END TEST raid_read_error_test 00:08:36.469 ************************************ 00:08:36.469 10:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:36.469 10:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:36.469 10:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:08:36.469 00:08:36.469 real 0m4.652s 00:08:36.469 user 0m5.686s 00:08:36.469 sys 0m0.572s 00:08:36.469 10:31:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.469 10:31:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.469 10:31:39 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:36.469 10:31:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:36.469 10:31:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.469 10:31:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:36.469 ************************************ 00:08:36.469 START TEST raid_write_error_test 00:08:36.469 ************************************ 00:08:36.469 10:31:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:08:36.469 10:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 
00:08:36.469 10:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:36.469 10:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:36.469 10:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:36.469 10:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:36.469 10:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:36.469 10:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:36.469 10:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:36.470 10:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:36.470 10:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:36.470 10:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:36.470 10:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:36.470 10:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:36.470 10:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:36.470 10:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:36.470 10:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:36.470 10:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:36.470 10:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:36.470 10:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:36.470 10:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:36.470 
10:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:36.470 10:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:36.470 10:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.mi0iasc5Dc 00:08:36.470 10:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62718 00:08:36.470 10:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:36.470 10:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62718 00:08:36.470 10:31:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62718 ']' 00:08:36.470 10:31:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.470 10:31:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:36.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.470 10:31:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.470 10:31:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:36.470 10:31:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.470 [2024-11-20 10:31:39.752225] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:08:36.470 [2024-11-20 10:31:39.752346] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62718 ] 00:08:36.470 [2024-11-20 10:31:39.929224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.730 [2024-11-20 10:31:40.057085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.989 [2024-11-20 10:31:40.283345] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.989 [2024-11-20 10:31:40.283419] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:37.249 10:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:37.249 10:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:37.249 10:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:37.249 10:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:37.249 10:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.249 10:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.249 BaseBdev1_malloc 00:08:37.249 10:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.249 10:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:37.249 10:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.249 10:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.249 true 00:08:37.249 10:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:37.249 10:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:37.249 10:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.249 10:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.249 [2024-11-20 10:31:40.685379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:37.249 [2024-11-20 10:31:40.685439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:37.249 [2024-11-20 10:31:40.685463] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:37.249 [2024-11-20 10:31:40.685476] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:37.249 [2024-11-20 10:31:40.687933] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:37.250 [2024-11-20 10:31:40.687979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:37.250 BaseBdev1 00:08:37.250 10:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.250 10:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:37.250 10:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:37.250 10:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.250 10:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.509 BaseBdev2_malloc 00:08:37.509 10:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.510 10:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:37.510 10:31:40 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.510 10:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.510 true 00:08:37.510 10:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.510 10:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:37.510 10:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.510 10:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.510 [2024-11-20 10:31:40.756994] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:37.510 [2024-11-20 10:31:40.757126] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:37.510 [2024-11-20 10:31:40.757154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:37.510 [2024-11-20 10:31:40.757168] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:37.510 [2024-11-20 10:31:40.759583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:37.510 [2024-11-20 10:31:40.759626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:37.510 BaseBdev2 00:08:37.510 10:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.510 10:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:37.510 10:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.510 10:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.510 [2024-11-20 10:31:40.769048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:37.510 [2024-11-20 10:31:40.771040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:37.510 [2024-11-20 10:31:40.771280] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:37.510 [2024-11-20 10:31:40.771295] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:37.510 [2024-11-20 10:31:40.771614] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:37.510 [2024-11-20 10:31:40.771882] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:37.510 [2024-11-20 10:31:40.771903] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:37.510 [2024-11-20 10:31:40.772098] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:37.510 10:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.510 10:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:37.510 10:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:37.510 10:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:37.510 10:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:37.510 10:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.510 10:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:37.510 10:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.510 10:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.510 10:31:40 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.510 10:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.510 10:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.510 10:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:37.510 10:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.510 10:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.510 10:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.510 10:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.510 "name": "raid_bdev1", 00:08:37.510 "uuid": "bf3bf5e6-7595-435b-a2e4-d36d6e1aa631", 00:08:37.510 "strip_size_kb": 64, 00:08:37.510 "state": "online", 00:08:37.510 "raid_level": "concat", 00:08:37.510 "superblock": true, 00:08:37.510 "num_base_bdevs": 2, 00:08:37.510 "num_base_bdevs_discovered": 2, 00:08:37.510 "num_base_bdevs_operational": 2, 00:08:37.510 "base_bdevs_list": [ 00:08:37.510 { 00:08:37.510 "name": "BaseBdev1", 00:08:37.510 "uuid": "7fc6d11f-11f5-5619-8136-3abedaa42c0f", 00:08:37.510 "is_configured": true, 00:08:37.510 "data_offset": 2048, 00:08:37.510 "data_size": 63488 00:08:37.510 }, 00:08:37.510 { 00:08:37.510 "name": "BaseBdev2", 00:08:37.510 "uuid": "fb1b4065-b2d8-531a-b6da-98c4fec24065", 00:08:37.510 "is_configured": true, 00:08:37.510 "data_offset": 2048, 00:08:37.510 "data_size": 63488 00:08:37.510 } 00:08:37.510 ] 00:08:37.510 }' 00:08:37.510 10:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.510 10:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.770 10:31:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- 
# /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:37.770 10:31:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:38.029 [2024-11-20 10:31:41.329647] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:38.967 10:31:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:38.967 10:31:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.967 10:31:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.967 10:31:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.967 10:31:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:38.967 10:31:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:38.967 10:31:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:38.967 10:31:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:38.967 10:31:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:38.967 10:31:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:38.967 10:31:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:38.967 10:31:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.967 10:31:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:38.967 10:31:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.967 10:31:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:08:38.967 10:31:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.967 10:31:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.967 10:31:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.967 10:31:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:38.967 10:31:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.967 10:31:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.967 10:31:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.967 10:31:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.967 "name": "raid_bdev1", 00:08:38.967 "uuid": "bf3bf5e6-7595-435b-a2e4-d36d6e1aa631", 00:08:38.967 "strip_size_kb": 64, 00:08:38.967 "state": "online", 00:08:38.967 "raid_level": "concat", 00:08:38.967 "superblock": true, 00:08:38.967 "num_base_bdevs": 2, 00:08:38.967 "num_base_bdevs_discovered": 2, 00:08:38.967 "num_base_bdevs_operational": 2, 00:08:38.967 "base_bdevs_list": [ 00:08:38.967 { 00:08:38.967 "name": "BaseBdev1", 00:08:38.967 "uuid": "7fc6d11f-11f5-5619-8136-3abedaa42c0f", 00:08:38.967 "is_configured": true, 00:08:38.967 "data_offset": 2048, 00:08:38.967 "data_size": 63488 00:08:38.967 }, 00:08:38.967 { 00:08:38.967 "name": "BaseBdev2", 00:08:38.967 "uuid": "fb1b4065-b2d8-531a-b6da-98c4fec24065", 00:08:38.967 "is_configured": true, 00:08:38.967 "data_offset": 2048, 00:08:38.967 "data_size": 63488 00:08:38.967 } 00:08:38.967 ] 00:08:38.967 }' 00:08:38.967 10:31:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.967 10:31:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.566 10:31:42 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:39.566 10:31:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.566 10:31:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.566 [2024-11-20 10:31:42.758782] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:39.566 [2024-11-20 10:31:42.758827] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:39.566 [2024-11-20 10:31:42.762000] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:39.566 [2024-11-20 10:31:42.762054] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.566 [2024-11-20 10:31:42.762091] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:39.566 [2024-11-20 10:31:42.762112] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:39.566 { 00:08:39.566 "results": [ 00:08:39.566 { 00:08:39.566 "job": "raid_bdev1", 00:08:39.566 "core_mask": "0x1", 00:08:39.566 "workload": "randrw", 00:08:39.566 "percentage": 50, 00:08:39.566 "status": "finished", 00:08:39.566 "queue_depth": 1, 00:08:39.566 "io_size": 131072, 00:08:39.566 "runtime": 1.429809, 00:08:39.566 "iops": 14361.358754910621, 00:08:39.566 "mibps": 1795.1698443638277, 00:08:39.566 "io_failed": 1, 00:08:39.566 "io_timeout": 0, 00:08:39.566 "avg_latency_us": 96.63562787146878, 00:08:39.566 "min_latency_us": 27.83580786026201, 00:08:39.566 "max_latency_us": 1802.955458515284 00:08:39.566 } 00:08:39.566 ], 00:08:39.566 "core_count": 1 00:08:39.566 } 00:08:39.566 10:31:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.566 10:31:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62718 00:08:39.566 10:31:42 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62718 ']' 00:08:39.566 10:31:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62718 00:08:39.566 10:31:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:39.566 10:31:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:39.566 10:31:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62718 00:08:39.566 10:31:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:39.566 10:31:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:39.566 killing process with pid 62718 00:08:39.566 10:31:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62718' 00:08:39.566 10:31:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62718 00:08:39.566 [2024-11-20 10:31:42.795279] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:39.566 10:31:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62718 00:08:39.566 [2024-11-20 10:31:42.954604] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:40.946 10:31:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.mi0iasc5Dc 00:08:40.946 10:31:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:40.946 10:31:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:40.946 10:31:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:08:40.946 10:31:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:40.946 10:31:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:40.946 10:31:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:40.946 10:31:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:08:40.946 00:08:40.946 real 0m4.576s 00:08:40.946 user 0m5.538s 00:08:40.946 sys 0m0.537s 00:08:40.946 10:31:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.946 10:31:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.946 ************************************ 00:08:40.946 END TEST raid_write_error_test 00:08:40.946 ************************************ 00:08:40.946 10:31:44 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:40.946 10:31:44 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:40.946 10:31:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:40.946 10:31:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.946 10:31:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:40.946 ************************************ 00:08:40.946 START TEST raid_state_function_test 00:08:40.946 ************************************ 00:08:40.947 10:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:08:40.947 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:40.947 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:40.947 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:40.947 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:40.947 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:40.947 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:08:40.947 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:40.947 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:40.947 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:40.947 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:40.947 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:40.947 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:40.947 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:40.947 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:40.947 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:40.947 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:40.947 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:40.947 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:40.947 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:40.947 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:40.947 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:40.947 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:40.947 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62861 00:08:40.947 Process raid pid: 62861 00:08:40.947 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 
'Process raid pid: 62861' 00:08:40.947 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62861 00:08:40.947 10:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62861 ']' 00:08:40.947 10:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.947 10:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:40.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.947 10:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.947 10:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:40.947 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:40.947 10:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.947 [2024-11-20 10:31:44.370242] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:08:40.947 [2024-11-20 10:31:44.370794] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.206 [2024-11-20 10:31:44.548843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.206 [2024-11-20 10:31:44.666324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.465 [2024-11-20 10:31:44.878982] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:41.465 [2024-11-20 10:31:44.879030] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:42.034 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:42.034 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:42.034 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:42.034 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.034 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.034 [2024-11-20 10:31:45.260669] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:42.034 [2024-11-20 10:31:45.260725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:42.034 [2024-11-20 10:31:45.260737] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:42.034 [2024-11-20 10:31:45.260749] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:42.034 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.034 10:31:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:42.034 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.034 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.034 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:42.034 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:42.034 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:42.034 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.034 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.035 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.035 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.035 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.035 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.035 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.035 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.035 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.035 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.035 "name": "Existed_Raid", 00:08:42.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.035 "strip_size_kb": 0, 00:08:42.035 "state": "configuring", 00:08:42.035 
"raid_level": "raid1", 00:08:42.035 "superblock": false, 00:08:42.035 "num_base_bdevs": 2, 00:08:42.035 "num_base_bdevs_discovered": 0, 00:08:42.035 "num_base_bdevs_operational": 2, 00:08:42.035 "base_bdevs_list": [ 00:08:42.035 { 00:08:42.035 "name": "BaseBdev1", 00:08:42.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.035 "is_configured": false, 00:08:42.035 "data_offset": 0, 00:08:42.035 "data_size": 0 00:08:42.035 }, 00:08:42.035 { 00:08:42.035 "name": "BaseBdev2", 00:08:42.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.035 "is_configured": false, 00:08:42.035 "data_offset": 0, 00:08:42.035 "data_size": 0 00:08:42.035 } 00:08:42.035 ] 00:08:42.035 }' 00:08:42.035 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.035 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.294 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:42.295 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.295 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.295 [2024-11-20 10:31:45.719915] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:42.295 [2024-11-20 10:31:45.719960] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:42.295 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.295 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:42.295 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.295 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:42.295 [2024-11-20 10:31:45.727873] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:42.295 [2024-11-20 10:31:45.727920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:42.295 [2024-11-20 10:31:45.727930] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:42.295 [2024-11-20 10:31:45.727942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:42.295 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.295 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:42.295 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.295 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.554 [2024-11-20 10:31:45.774501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:42.554 BaseBdev1 00:08:42.554 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.554 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:42.554 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:42.554 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:42.554 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:42.554 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:42.554 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:42.554 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:08:42.554 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.554 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.554 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.554 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:42.554 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.554 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.554 [ 00:08:42.554 { 00:08:42.554 "name": "BaseBdev1", 00:08:42.554 "aliases": [ 00:08:42.554 "defbad5b-ba24-4145-9b84-c514759a58a8" 00:08:42.554 ], 00:08:42.554 "product_name": "Malloc disk", 00:08:42.554 "block_size": 512, 00:08:42.554 "num_blocks": 65536, 00:08:42.554 "uuid": "defbad5b-ba24-4145-9b84-c514759a58a8", 00:08:42.554 "assigned_rate_limits": { 00:08:42.554 "rw_ios_per_sec": 0, 00:08:42.554 "rw_mbytes_per_sec": 0, 00:08:42.554 "r_mbytes_per_sec": 0, 00:08:42.554 "w_mbytes_per_sec": 0 00:08:42.554 }, 00:08:42.554 "claimed": true, 00:08:42.554 "claim_type": "exclusive_write", 00:08:42.554 "zoned": false, 00:08:42.554 "supported_io_types": { 00:08:42.554 "read": true, 00:08:42.554 "write": true, 00:08:42.554 "unmap": true, 00:08:42.554 "flush": true, 00:08:42.554 "reset": true, 00:08:42.554 "nvme_admin": false, 00:08:42.554 "nvme_io": false, 00:08:42.554 "nvme_io_md": false, 00:08:42.554 "write_zeroes": true, 00:08:42.554 "zcopy": true, 00:08:42.554 "get_zone_info": false, 00:08:42.554 "zone_management": false, 00:08:42.554 "zone_append": false, 00:08:42.554 "compare": false, 00:08:42.554 "compare_and_write": false, 00:08:42.554 "abort": true, 00:08:42.554 "seek_hole": false, 00:08:42.554 "seek_data": false, 00:08:42.554 "copy": true, 00:08:42.554 "nvme_iov_md": 
false 00:08:42.554 }, 00:08:42.554 "memory_domains": [ 00:08:42.554 { 00:08:42.554 "dma_device_id": "system", 00:08:42.554 "dma_device_type": 1 00:08:42.554 }, 00:08:42.554 { 00:08:42.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.554 "dma_device_type": 2 00:08:42.554 } 00:08:42.554 ], 00:08:42.554 "driver_specific": {} 00:08:42.554 } 00:08:42.554 ] 00:08:42.554 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.554 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:42.554 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:42.554 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.554 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.554 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:42.554 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:42.554 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:42.554 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.554 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.554 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.554 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.554 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.554 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.554 10:31:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.554 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.554 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.555 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.555 "name": "Existed_Raid", 00:08:42.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.555 "strip_size_kb": 0, 00:08:42.555 "state": "configuring", 00:08:42.555 "raid_level": "raid1", 00:08:42.555 "superblock": false, 00:08:42.555 "num_base_bdevs": 2, 00:08:42.555 "num_base_bdevs_discovered": 1, 00:08:42.555 "num_base_bdevs_operational": 2, 00:08:42.555 "base_bdevs_list": [ 00:08:42.555 { 00:08:42.555 "name": "BaseBdev1", 00:08:42.555 "uuid": "defbad5b-ba24-4145-9b84-c514759a58a8", 00:08:42.555 "is_configured": true, 00:08:42.555 "data_offset": 0, 00:08:42.555 "data_size": 65536 00:08:42.555 }, 00:08:42.555 { 00:08:42.555 "name": "BaseBdev2", 00:08:42.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.555 "is_configured": false, 00:08:42.555 "data_offset": 0, 00:08:42.555 "data_size": 0 00:08:42.555 } 00:08:42.555 ] 00:08:42.555 }' 00:08:42.555 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.555 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.814 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:42.814 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.814 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.814 [2024-11-20 10:31:46.273765] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:42.814 [2024-11-20 10:31:46.273845] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:42.814 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.814 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:42.814 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.814 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.814 [2024-11-20 10:31:46.281800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:42.814 [2024-11-20 10:31:46.283983] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:42.814 [2024-11-20 10:31:46.284032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:42.815 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.815 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:42.815 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:42.815 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:42.815 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.815 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.815 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:42.815 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:42.815 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:08:42.815 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.815 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.815 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.815 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.815 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.815 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.815 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.074 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.074 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.074 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.074 "name": "Existed_Raid", 00:08:43.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.074 "strip_size_kb": 0, 00:08:43.074 "state": "configuring", 00:08:43.074 "raid_level": "raid1", 00:08:43.074 "superblock": false, 00:08:43.074 "num_base_bdevs": 2, 00:08:43.074 "num_base_bdevs_discovered": 1, 00:08:43.074 "num_base_bdevs_operational": 2, 00:08:43.074 "base_bdevs_list": [ 00:08:43.074 { 00:08:43.074 "name": "BaseBdev1", 00:08:43.074 "uuid": "defbad5b-ba24-4145-9b84-c514759a58a8", 00:08:43.074 "is_configured": true, 00:08:43.074 "data_offset": 0, 00:08:43.074 "data_size": 65536 00:08:43.074 }, 00:08:43.074 { 00:08:43.074 "name": "BaseBdev2", 00:08:43.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.074 "is_configured": false, 00:08:43.074 "data_offset": 0, 00:08:43.074 "data_size": 0 00:08:43.074 } 00:08:43.074 
] 00:08:43.074 }' 00:08:43.074 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.074 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.333 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:43.333 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.333 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.333 [2024-11-20 10:31:46.779836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:43.333 [2024-11-20 10:31:46.779900] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:43.333 [2024-11-20 10:31:46.779908] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:43.333 [2024-11-20 10:31:46.780178] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:43.333 [2024-11-20 10:31:46.780364] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:43.333 [2024-11-20 10:31:46.780418] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:43.333 [2024-11-20 10:31:46.780731] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:43.333 BaseBdev2 00:08:43.333 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.333 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:43.333 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:43.333 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:43.333 10:31:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:43.333 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:43.334 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:43.334 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:43.334 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.334 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.334 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.334 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:43.334 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.334 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.334 [ 00:08:43.334 { 00:08:43.334 "name": "BaseBdev2", 00:08:43.334 "aliases": [ 00:08:43.334 "314d1170-4f4a-45a8-ab02-5c066d2d8a19" 00:08:43.334 ], 00:08:43.334 "product_name": "Malloc disk", 00:08:43.334 "block_size": 512, 00:08:43.334 "num_blocks": 65536, 00:08:43.334 "uuid": "314d1170-4f4a-45a8-ab02-5c066d2d8a19", 00:08:43.334 "assigned_rate_limits": { 00:08:43.334 "rw_ios_per_sec": 0, 00:08:43.334 "rw_mbytes_per_sec": 0, 00:08:43.334 "r_mbytes_per_sec": 0, 00:08:43.334 "w_mbytes_per_sec": 0 00:08:43.334 }, 00:08:43.334 "claimed": true, 00:08:43.334 "claim_type": "exclusive_write", 00:08:43.334 "zoned": false, 00:08:43.334 "supported_io_types": { 00:08:43.334 "read": true, 00:08:43.334 "write": true, 00:08:43.334 "unmap": true, 00:08:43.334 "flush": true, 00:08:43.334 "reset": true, 00:08:43.334 "nvme_admin": false, 00:08:43.334 "nvme_io": false, 00:08:43.334 "nvme_io_md": 
false, 00:08:43.334 "write_zeroes": true, 00:08:43.334 "zcopy": true, 00:08:43.334 "get_zone_info": false, 00:08:43.334 "zone_management": false, 00:08:43.334 "zone_append": false, 00:08:43.334 "compare": false, 00:08:43.334 "compare_and_write": false, 00:08:43.334 "abort": true, 00:08:43.334 "seek_hole": false, 00:08:43.334 "seek_data": false, 00:08:43.334 "copy": true, 00:08:43.334 "nvme_iov_md": false 00:08:43.334 }, 00:08:43.334 "memory_domains": [ 00:08:43.334 { 00:08:43.334 "dma_device_id": "system", 00:08:43.334 "dma_device_type": 1 00:08:43.334 }, 00:08:43.334 { 00:08:43.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.334 "dma_device_type": 2 00:08:43.334 } 00:08:43.334 ], 00:08:43.334 "driver_specific": {} 00:08:43.334 } 00:08:43.334 ] 00:08:43.334 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.334 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:43.334 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:43.334 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:43.334 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:43.334 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.334 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:43.334 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:43.334 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:43.334 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:43.334 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:43.334 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.334 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.334 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.334 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.334 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.334 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.334 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.594 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.594 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.594 "name": "Existed_Raid", 00:08:43.594 "uuid": "0a0770bd-9d20-4aa5-9bd8-f03ae34dd4ea", 00:08:43.594 "strip_size_kb": 0, 00:08:43.594 "state": "online", 00:08:43.594 "raid_level": "raid1", 00:08:43.594 "superblock": false, 00:08:43.594 "num_base_bdevs": 2, 00:08:43.594 "num_base_bdevs_discovered": 2, 00:08:43.594 "num_base_bdevs_operational": 2, 00:08:43.594 "base_bdevs_list": [ 00:08:43.594 { 00:08:43.594 "name": "BaseBdev1", 00:08:43.594 "uuid": "defbad5b-ba24-4145-9b84-c514759a58a8", 00:08:43.594 "is_configured": true, 00:08:43.594 "data_offset": 0, 00:08:43.594 "data_size": 65536 00:08:43.594 }, 00:08:43.594 { 00:08:43.594 "name": "BaseBdev2", 00:08:43.594 "uuid": "314d1170-4f4a-45a8-ab02-5c066d2d8a19", 00:08:43.594 "is_configured": true, 00:08:43.594 "data_offset": 0, 00:08:43.594 "data_size": 65536 00:08:43.594 } 00:08:43.594 ] 00:08:43.594 }' 00:08:43.594 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:43.594 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.854 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:43.854 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:43.854 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:43.854 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:43.854 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:43.854 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:43.854 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:43.854 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:43.854 10:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.854 10:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.854 [2024-11-20 10:31:47.203596] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:43.854 10:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.854 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:43.854 "name": "Existed_Raid", 00:08:43.854 "aliases": [ 00:08:43.854 "0a0770bd-9d20-4aa5-9bd8-f03ae34dd4ea" 00:08:43.854 ], 00:08:43.854 "product_name": "Raid Volume", 00:08:43.854 "block_size": 512, 00:08:43.854 "num_blocks": 65536, 00:08:43.854 "uuid": "0a0770bd-9d20-4aa5-9bd8-f03ae34dd4ea", 00:08:43.854 "assigned_rate_limits": { 00:08:43.854 "rw_ios_per_sec": 0, 00:08:43.854 "rw_mbytes_per_sec": 0, 00:08:43.854 "r_mbytes_per_sec": 
0, 00:08:43.854 "w_mbytes_per_sec": 0 00:08:43.854 }, 00:08:43.854 "claimed": false, 00:08:43.854 "zoned": false, 00:08:43.854 "supported_io_types": { 00:08:43.854 "read": true, 00:08:43.854 "write": true, 00:08:43.854 "unmap": false, 00:08:43.854 "flush": false, 00:08:43.854 "reset": true, 00:08:43.854 "nvme_admin": false, 00:08:43.854 "nvme_io": false, 00:08:43.854 "nvme_io_md": false, 00:08:43.854 "write_zeroes": true, 00:08:43.854 "zcopy": false, 00:08:43.854 "get_zone_info": false, 00:08:43.854 "zone_management": false, 00:08:43.854 "zone_append": false, 00:08:43.854 "compare": false, 00:08:43.854 "compare_and_write": false, 00:08:43.855 "abort": false, 00:08:43.855 "seek_hole": false, 00:08:43.855 "seek_data": false, 00:08:43.855 "copy": false, 00:08:43.855 "nvme_iov_md": false 00:08:43.855 }, 00:08:43.855 "memory_domains": [ 00:08:43.855 { 00:08:43.855 "dma_device_id": "system", 00:08:43.855 "dma_device_type": 1 00:08:43.855 }, 00:08:43.855 { 00:08:43.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.855 "dma_device_type": 2 00:08:43.855 }, 00:08:43.855 { 00:08:43.855 "dma_device_id": "system", 00:08:43.855 "dma_device_type": 1 00:08:43.855 }, 00:08:43.855 { 00:08:43.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.855 "dma_device_type": 2 00:08:43.855 } 00:08:43.855 ], 00:08:43.855 "driver_specific": { 00:08:43.855 "raid": { 00:08:43.855 "uuid": "0a0770bd-9d20-4aa5-9bd8-f03ae34dd4ea", 00:08:43.855 "strip_size_kb": 0, 00:08:43.855 "state": "online", 00:08:43.855 "raid_level": "raid1", 00:08:43.855 "superblock": false, 00:08:43.855 "num_base_bdevs": 2, 00:08:43.855 "num_base_bdevs_discovered": 2, 00:08:43.855 "num_base_bdevs_operational": 2, 00:08:43.855 "base_bdevs_list": [ 00:08:43.855 { 00:08:43.855 "name": "BaseBdev1", 00:08:43.855 "uuid": "defbad5b-ba24-4145-9b84-c514759a58a8", 00:08:43.855 "is_configured": true, 00:08:43.855 "data_offset": 0, 00:08:43.855 "data_size": 65536 00:08:43.855 }, 00:08:43.855 { 00:08:43.855 "name": "BaseBdev2", 
00:08:43.855 "uuid": "314d1170-4f4a-45a8-ab02-5c066d2d8a19", 00:08:43.855 "is_configured": true, 00:08:43.855 "data_offset": 0, 00:08:43.855 "data_size": 65536 00:08:43.855 } 00:08:43.855 ] 00:08:43.855 } 00:08:43.855 } 00:08:43.855 }' 00:08:43.855 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:43.855 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:43.855 BaseBdev2' 00:08:43.855 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.855 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:43.855 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:43.855 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:43.855 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.855 10:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.855 10:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.116 10:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.116 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.116 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.116 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.116 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:08:44.116 10:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.116 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.116 10:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.116 10:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.116 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.116 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.116 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:44.116 10:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.116 10:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.116 [2024-11-20 10:31:47.410970] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:44.116 10:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.116 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:44.116 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:44.116 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:44.116 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:44.116 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:44.116 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:44.116 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:08:44.116 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:44.116 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:44.116 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:44.116 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:44.116 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.116 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.116 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.116 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.116 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.116 10:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.116 10:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.116 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.116 10:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.116 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.116 "name": "Existed_Raid", 00:08:44.116 "uuid": "0a0770bd-9d20-4aa5-9bd8-f03ae34dd4ea", 00:08:44.116 "strip_size_kb": 0, 00:08:44.116 "state": "online", 00:08:44.116 "raid_level": "raid1", 00:08:44.116 "superblock": false, 00:08:44.116 "num_base_bdevs": 2, 00:08:44.116 "num_base_bdevs_discovered": 1, 00:08:44.116 "num_base_bdevs_operational": 1, 00:08:44.116 "base_bdevs_list": [ 00:08:44.116 
{ 00:08:44.116 "name": null, 00:08:44.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.116 "is_configured": false, 00:08:44.116 "data_offset": 0, 00:08:44.116 "data_size": 65536 00:08:44.116 }, 00:08:44.116 { 00:08:44.116 "name": "BaseBdev2", 00:08:44.116 "uuid": "314d1170-4f4a-45a8-ab02-5c066d2d8a19", 00:08:44.116 "is_configured": true, 00:08:44.116 "data_offset": 0, 00:08:44.116 "data_size": 65536 00:08:44.116 } 00:08:44.116 ] 00:08:44.116 }' 00:08:44.116 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.116 10:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.685 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:44.685 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:44.685 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.685 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:44.685 10:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.685 10:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.685 10:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.685 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:44.685 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:44.685 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:44.685 10:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.685 10:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:44.685 [2024-11-20 10:31:47.983856] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:44.685 [2024-11-20 10:31:47.984030] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:44.685 [2024-11-20 10:31:48.098164] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:44.685 [2024-11-20 10:31:48.098322] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:44.685 [2024-11-20 10:31:48.098415] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:44.685 10:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.685 10:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:44.685 10:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:44.685 10:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.685 10:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.685 10:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.685 10:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:44.685 10:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.685 10:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:44.685 10:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:44.685 10:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:44.685 10:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62861 00:08:44.685 10:31:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62861 ']' 00:08:44.685 10:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62861 00:08:44.685 10:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:44.685 10:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:44.685 10:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62861 00:08:44.945 killing process with pid 62861 00:08:44.945 10:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:44.945 10:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:44.945 10:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62861' 00:08:44.945 10:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62861 00:08:44.945 10:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62861 00:08:44.945 [2024-11-20 10:31:48.187076] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:44.945 [2024-11-20 10:31:48.207064] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:46.324 10:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:46.324 00:08:46.324 real 0m5.156s 00:08:46.324 user 0m7.436s 00:08:46.324 sys 0m0.745s 00:08:46.324 10:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.324 10:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.324 ************************************ 00:08:46.324 END TEST raid_state_function_test 00:08:46.324 ************************************ 00:08:46.324 10:31:49 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:46.324 10:31:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:46.324 10:31:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.324 10:31:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:46.324 ************************************ 00:08:46.324 START TEST raid_state_function_test_sb 00:08:46.324 ************************************ 00:08:46.324 10:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:08:46.324 10:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:46.324 10:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:46.324 10:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:46.324 10:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:46.324 10:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:46.324 10:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:46.324 10:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:46.324 10:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:46.324 10:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:46.324 10:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:46.324 10:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:46.324 10:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:46.324 10:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:46.324 Process raid pid: 63111 00:08:46.324 10:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:46.324 10:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:46.324 10:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:46.324 10:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:46.324 10:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:46.324 10:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:46.324 10:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:46.324 10:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:46.324 10:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:46.324 10:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63111 00:08:46.325 10:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:46.325 10:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63111' 00:08:46.325 10:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63111 00:08:46.325 10:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 63111 ']' 00:08:46.325 10:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.325 10:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:46.325 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.325 10:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.325 10:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:46.325 10:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.325 [2024-11-20 10:31:49.611463] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:08:46.325 [2024-11-20 10:31:49.611690] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.325 [2024-11-20 10:31:49.787686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.584 [2024-11-20 10:31:49.917521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.843 [2024-11-20 10:31:50.148921] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:46.843 [2024-11-20 10:31:50.148965] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.103 10:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:47.103 10:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:47.103 10:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:47.103 10:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.103 10:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.103 [2024-11-20 10:31:50.520004] 
bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:47.103 [2024-11-20 10:31:50.520106] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:47.103 [2024-11-20 10:31:50.520123] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:47.103 [2024-11-20 10:31:50.520135] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:47.103 10:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.103 10:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:47.103 10:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.103 10:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.103 10:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:47.103 10:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:47.103 10:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:47.103 10:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.103 10:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.103 10:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.103 10:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.103 10:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.103 10:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:08:47.103 10:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.103 10:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.103 10:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.103 10:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.103 "name": "Existed_Raid", 00:08:47.103 "uuid": "25239892-8295-450e-b54a-add6d0a4b902", 00:08:47.103 "strip_size_kb": 0, 00:08:47.103 "state": "configuring", 00:08:47.103 "raid_level": "raid1", 00:08:47.103 "superblock": true, 00:08:47.103 "num_base_bdevs": 2, 00:08:47.103 "num_base_bdevs_discovered": 0, 00:08:47.103 "num_base_bdevs_operational": 2, 00:08:47.103 "base_bdevs_list": [ 00:08:47.103 { 00:08:47.103 "name": "BaseBdev1", 00:08:47.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.103 "is_configured": false, 00:08:47.103 "data_offset": 0, 00:08:47.103 "data_size": 0 00:08:47.103 }, 00:08:47.103 { 00:08:47.103 "name": "BaseBdev2", 00:08:47.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.103 "is_configured": false, 00:08:47.103 "data_offset": 0, 00:08:47.103 "data_size": 0 00:08:47.103 } 00:08:47.103 ] 00:08:47.103 }' 00:08:47.103 10:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.103 10:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.672 10:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:47.672 10:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.672 10:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.672 [2024-11-20 10:31:50.999170] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:08:47.672 [2024-11-20 10:31:50.999255] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.672 [2024-11-20 10:31:51.007139] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:47.672 [2024-11-20 10:31:51.007224] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:47.672 [2024-11-20 10:31:51.007259] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:47.672 [2024-11-20 10:31:51.007288] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.672 [2024-11-20 10:31:51.053230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:47.672 BaseBdev1 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.672 [ 00:08:47.672 { 00:08:47.672 "name": "BaseBdev1", 00:08:47.672 "aliases": [ 00:08:47.672 "43cd3a32-cbf6-4846-a1a7-885c3ab8ffb4" 00:08:47.672 ], 00:08:47.672 "product_name": "Malloc disk", 00:08:47.672 "block_size": 512, 00:08:47.672 "num_blocks": 65536, 00:08:47.672 "uuid": "43cd3a32-cbf6-4846-a1a7-885c3ab8ffb4", 00:08:47.672 "assigned_rate_limits": { 00:08:47.672 "rw_ios_per_sec": 0, 00:08:47.672 "rw_mbytes_per_sec": 0, 00:08:47.672 "r_mbytes_per_sec": 0, 00:08:47.672 "w_mbytes_per_sec": 0 00:08:47.672 }, 00:08:47.672 "claimed": true, 
00:08:47.672 "claim_type": "exclusive_write", 00:08:47.672 "zoned": false, 00:08:47.672 "supported_io_types": { 00:08:47.672 "read": true, 00:08:47.672 "write": true, 00:08:47.672 "unmap": true, 00:08:47.672 "flush": true, 00:08:47.672 "reset": true, 00:08:47.672 "nvme_admin": false, 00:08:47.672 "nvme_io": false, 00:08:47.672 "nvme_io_md": false, 00:08:47.672 "write_zeroes": true, 00:08:47.672 "zcopy": true, 00:08:47.672 "get_zone_info": false, 00:08:47.672 "zone_management": false, 00:08:47.672 "zone_append": false, 00:08:47.672 "compare": false, 00:08:47.672 "compare_and_write": false, 00:08:47.672 "abort": true, 00:08:47.672 "seek_hole": false, 00:08:47.672 "seek_data": false, 00:08:47.672 "copy": true, 00:08:47.672 "nvme_iov_md": false 00:08:47.672 }, 00:08:47.672 "memory_domains": [ 00:08:47.672 { 00:08:47.672 "dma_device_id": "system", 00:08:47.672 "dma_device_type": 1 00:08:47.672 }, 00:08:47.672 { 00:08:47.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.672 "dma_device_type": 2 00:08:47.672 } 00:08:47.672 ], 00:08:47.672 "driver_specific": {} 00:08:47.672 } 00:08:47.672 ] 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.672 "name": "Existed_Raid", 00:08:47.672 "uuid": "b16325ed-8dd1-4738-acf6-1ccfcbf50c7f", 00:08:47.672 "strip_size_kb": 0, 00:08:47.672 "state": "configuring", 00:08:47.672 "raid_level": "raid1", 00:08:47.672 "superblock": true, 00:08:47.672 "num_base_bdevs": 2, 00:08:47.672 "num_base_bdevs_discovered": 1, 00:08:47.672 "num_base_bdevs_operational": 2, 00:08:47.672 "base_bdevs_list": [ 00:08:47.672 { 00:08:47.672 "name": "BaseBdev1", 00:08:47.672 "uuid": "43cd3a32-cbf6-4846-a1a7-885c3ab8ffb4", 00:08:47.672 "is_configured": true, 00:08:47.672 "data_offset": 2048, 00:08:47.672 "data_size": 63488 00:08:47.672 }, 00:08:47.672 { 00:08:47.672 "name": "BaseBdev2", 00:08:47.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.672 "is_configured": false, 00:08:47.672 
"data_offset": 0, 00:08:47.672 "data_size": 0 00:08:47.672 } 00:08:47.672 ] 00:08:47.672 }' 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.672 10:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.240 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:48.240 10:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.240 10:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.240 [2024-11-20 10:31:51.572434] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:48.240 [2024-11-20 10:31:51.572492] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:48.240 10:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.240 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:48.240 10:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.240 10:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.240 [2024-11-20 10:31:51.580475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:48.240 [2024-11-20 10:31:51.582632] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:48.240 [2024-11-20 10:31:51.582679] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:48.240 10:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.240 10:31:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:48.240 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:48.240 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:48.240 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.241 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.241 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:48.241 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:48.241 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:48.241 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.241 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.241 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.241 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.241 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.241 10:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.241 10:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.241 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.241 10:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.241 10:31:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.241 "name": "Existed_Raid", 00:08:48.241 "uuid": "64b0cfb6-412b-4897-95dc-0a51ce38ae90", 00:08:48.241 "strip_size_kb": 0, 00:08:48.241 "state": "configuring", 00:08:48.241 "raid_level": "raid1", 00:08:48.241 "superblock": true, 00:08:48.241 "num_base_bdevs": 2, 00:08:48.241 "num_base_bdevs_discovered": 1, 00:08:48.241 "num_base_bdevs_operational": 2, 00:08:48.241 "base_bdevs_list": [ 00:08:48.241 { 00:08:48.241 "name": "BaseBdev1", 00:08:48.241 "uuid": "43cd3a32-cbf6-4846-a1a7-885c3ab8ffb4", 00:08:48.241 "is_configured": true, 00:08:48.241 "data_offset": 2048, 00:08:48.241 "data_size": 63488 00:08:48.241 }, 00:08:48.241 { 00:08:48.241 "name": "BaseBdev2", 00:08:48.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.241 "is_configured": false, 00:08:48.241 "data_offset": 0, 00:08:48.241 "data_size": 0 00:08:48.241 } 00:08:48.241 ] 00:08:48.241 }' 00:08:48.241 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.241 10:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.814 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:48.814 10:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.814 10:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.814 [2024-11-20 10:31:52.063101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:48.814 [2024-11-20 10:31:52.063655] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:48.814 [2024-11-20 10:31:52.063709] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:48.814 [2024-11-20 10:31:52.064086] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:48.814 
BaseBdev2 00:08:48.814 [2024-11-20 10:31:52.064330] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:48.814 [2024-11-20 10:31:52.064373] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:48.814 [2024-11-20 10:31:52.064560] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.814 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.814 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:48.814 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:48.814 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:48.814 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:48.814 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:48.814 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:48.815 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:48.815 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.815 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.815 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.815 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:48.815 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.815 10:31:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:48.815 [ 00:08:48.815 { 00:08:48.815 "name": "BaseBdev2", 00:08:48.815 "aliases": [ 00:08:48.815 "3cddd136-aebb-49a8-b818-ab5e4341fe0c" 00:08:48.815 ], 00:08:48.815 "product_name": "Malloc disk", 00:08:48.815 "block_size": 512, 00:08:48.815 "num_blocks": 65536, 00:08:48.815 "uuid": "3cddd136-aebb-49a8-b818-ab5e4341fe0c", 00:08:48.815 "assigned_rate_limits": { 00:08:48.815 "rw_ios_per_sec": 0, 00:08:48.815 "rw_mbytes_per_sec": 0, 00:08:48.815 "r_mbytes_per_sec": 0, 00:08:48.815 "w_mbytes_per_sec": 0 00:08:48.815 }, 00:08:48.815 "claimed": true, 00:08:48.815 "claim_type": "exclusive_write", 00:08:48.815 "zoned": false, 00:08:48.815 "supported_io_types": { 00:08:48.815 "read": true, 00:08:48.815 "write": true, 00:08:48.815 "unmap": true, 00:08:48.815 "flush": true, 00:08:48.815 "reset": true, 00:08:48.815 "nvme_admin": false, 00:08:48.815 "nvme_io": false, 00:08:48.815 "nvme_io_md": false, 00:08:48.815 "write_zeroes": true, 00:08:48.815 "zcopy": true, 00:08:48.815 "get_zone_info": false, 00:08:48.815 "zone_management": false, 00:08:48.815 "zone_append": false, 00:08:48.815 "compare": false, 00:08:48.815 "compare_and_write": false, 00:08:48.815 "abort": true, 00:08:48.815 "seek_hole": false, 00:08:48.815 "seek_data": false, 00:08:48.815 "copy": true, 00:08:48.815 "nvme_iov_md": false 00:08:48.815 }, 00:08:48.815 "memory_domains": [ 00:08:48.815 { 00:08:48.815 "dma_device_id": "system", 00:08:48.815 "dma_device_type": 1 00:08:48.815 }, 00:08:48.815 { 00:08:48.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.815 "dma_device_type": 2 00:08:48.815 } 00:08:48.815 ], 00:08:48.815 "driver_specific": {} 00:08:48.815 } 00:08:48.815 ] 00:08:48.815 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.815 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:48.815 10:31:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:48.815 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:48.815 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:48.815 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.815 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:48.815 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:48.815 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:48.815 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:48.815 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.815 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.815 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.815 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.815 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.815 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.815 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.815 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.815 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.815 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:48.815 "name": "Existed_Raid", 00:08:48.815 "uuid": "64b0cfb6-412b-4897-95dc-0a51ce38ae90", 00:08:48.815 "strip_size_kb": 0, 00:08:48.815 "state": "online", 00:08:48.815 "raid_level": "raid1", 00:08:48.815 "superblock": true, 00:08:48.815 "num_base_bdevs": 2, 00:08:48.815 "num_base_bdevs_discovered": 2, 00:08:48.815 "num_base_bdevs_operational": 2, 00:08:48.815 "base_bdevs_list": [ 00:08:48.815 { 00:08:48.815 "name": "BaseBdev1", 00:08:48.815 "uuid": "43cd3a32-cbf6-4846-a1a7-885c3ab8ffb4", 00:08:48.815 "is_configured": true, 00:08:48.815 "data_offset": 2048, 00:08:48.815 "data_size": 63488 00:08:48.815 }, 00:08:48.815 { 00:08:48.815 "name": "BaseBdev2", 00:08:48.815 "uuid": "3cddd136-aebb-49a8-b818-ab5e4341fe0c", 00:08:48.815 "is_configured": true, 00:08:48.815 "data_offset": 2048, 00:08:48.815 "data_size": 63488 00:08:48.815 } 00:08:48.815 ] 00:08:48.815 }' 00:08:48.815 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.815 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.075 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:49.075 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:49.075 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:49.075 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:49.075 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:49.075 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:49.075 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:49.075 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- 
# rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:49.075 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.075 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.075 [2024-11-20 10:31:52.538831] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:49.335 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.335 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:49.335 "name": "Existed_Raid", 00:08:49.335 "aliases": [ 00:08:49.335 "64b0cfb6-412b-4897-95dc-0a51ce38ae90" 00:08:49.335 ], 00:08:49.335 "product_name": "Raid Volume", 00:08:49.335 "block_size": 512, 00:08:49.335 "num_blocks": 63488, 00:08:49.335 "uuid": "64b0cfb6-412b-4897-95dc-0a51ce38ae90", 00:08:49.335 "assigned_rate_limits": { 00:08:49.335 "rw_ios_per_sec": 0, 00:08:49.335 "rw_mbytes_per_sec": 0, 00:08:49.335 "r_mbytes_per_sec": 0, 00:08:49.335 "w_mbytes_per_sec": 0 00:08:49.335 }, 00:08:49.335 "claimed": false, 00:08:49.335 "zoned": false, 00:08:49.335 "supported_io_types": { 00:08:49.335 "read": true, 00:08:49.335 "write": true, 00:08:49.335 "unmap": false, 00:08:49.335 "flush": false, 00:08:49.335 "reset": true, 00:08:49.335 "nvme_admin": false, 00:08:49.335 "nvme_io": false, 00:08:49.335 "nvme_io_md": false, 00:08:49.335 "write_zeroes": true, 00:08:49.335 "zcopy": false, 00:08:49.335 "get_zone_info": false, 00:08:49.335 "zone_management": false, 00:08:49.335 "zone_append": false, 00:08:49.335 "compare": false, 00:08:49.335 "compare_and_write": false, 00:08:49.335 "abort": false, 00:08:49.335 "seek_hole": false, 00:08:49.335 "seek_data": false, 00:08:49.335 "copy": false, 00:08:49.335 "nvme_iov_md": false 00:08:49.335 }, 00:08:49.335 "memory_domains": [ 00:08:49.335 { 00:08:49.335 "dma_device_id": "system", 00:08:49.335 "dma_device_type": 1 00:08:49.335 }, 
00:08:49.335 { 00:08:49.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.335 "dma_device_type": 2 00:08:49.335 }, 00:08:49.335 { 00:08:49.335 "dma_device_id": "system", 00:08:49.335 "dma_device_type": 1 00:08:49.335 }, 00:08:49.335 { 00:08:49.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.335 "dma_device_type": 2 00:08:49.335 } 00:08:49.335 ], 00:08:49.335 "driver_specific": { 00:08:49.335 "raid": { 00:08:49.335 "uuid": "64b0cfb6-412b-4897-95dc-0a51ce38ae90", 00:08:49.335 "strip_size_kb": 0, 00:08:49.335 "state": "online", 00:08:49.335 "raid_level": "raid1", 00:08:49.335 "superblock": true, 00:08:49.335 "num_base_bdevs": 2, 00:08:49.335 "num_base_bdevs_discovered": 2, 00:08:49.335 "num_base_bdevs_operational": 2, 00:08:49.335 "base_bdevs_list": [ 00:08:49.335 { 00:08:49.335 "name": "BaseBdev1", 00:08:49.335 "uuid": "43cd3a32-cbf6-4846-a1a7-885c3ab8ffb4", 00:08:49.335 "is_configured": true, 00:08:49.335 "data_offset": 2048, 00:08:49.335 "data_size": 63488 00:08:49.335 }, 00:08:49.335 { 00:08:49.335 "name": "BaseBdev2", 00:08:49.335 "uuid": "3cddd136-aebb-49a8-b818-ab5e4341fe0c", 00:08:49.335 "is_configured": true, 00:08:49.335 "data_offset": 2048, 00:08:49.335 "data_size": 63488 00:08:49.335 } 00:08:49.335 ] 00:08:49.335 } 00:08:49.335 } 00:08:49.335 }' 00:08:49.335 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:49.335 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:49.335 BaseBdev2' 00:08:49.335 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.335 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:49.335 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:08:49.335 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.335 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:49.335 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.335 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.335 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.335 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.335 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.335 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.335 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.335 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:49.335 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.335 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.335 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.335 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.335 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.335 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:49.335 10:31:52 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.335 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.335 [2024-11-20 10:31:52.722137] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:49.595 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.595 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:49.595 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:49.595 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:49.595 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:49.595 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:49.595 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:49.595 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.595 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:49.595 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:49.595 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:49.595 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:49.595 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.595 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.595 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.595 
10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.595 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.595 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.595 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.595 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.595 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.595 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.595 "name": "Existed_Raid", 00:08:49.595 "uuid": "64b0cfb6-412b-4897-95dc-0a51ce38ae90", 00:08:49.595 "strip_size_kb": 0, 00:08:49.595 "state": "online", 00:08:49.595 "raid_level": "raid1", 00:08:49.595 "superblock": true, 00:08:49.595 "num_base_bdevs": 2, 00:08:49.595 "num_base_bdevs_discovered": 1, 00:08:49.595 "num_base_bdevs_operational": 1, 00:08:49.595 "base_bdevs_list": [ 00:08:49.595 { 00:08:49.595 "name": null, 00:08:49.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.595 "is_configured": false, 00:08:49.595 "data_offset": 0, 00:08:49.595 "data_size": 63488 00:08:49.595 }, 00:08:49.595 { 00:08:49.595 "name": "BaseBdev2", 00:08:49.595 "uuid": "3cddd136-aebb-49a8-b818-ab5e4341fe0c", 00:08:49.595 "is_configured": true, 00:08:49.595 "data_offset": 2048, 00:08:49.595 "data_size": 63488 00:08:49.595 } 00:08:49.595 ] 00:08:49.595 }' 00:08:49.595 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.595 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.855 10:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:49.855 10:31:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:49.855 10:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.855 10:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:49.855 10:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.855 10:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.855 10:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.855 10:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:49.855 10:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:49.855 10:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:49.855 10:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.855 10:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.855 [2024-11-20 10:31:53.330419] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:49.855 [2024-11-20 10:31:53.330576] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:50.114 [2024-11-20 10:31:53.440826] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:50.114 [2024-11-20 10:31:53.440953] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:50.114 [2024-11-20 10:31:53.440977] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:50.114 10:31:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.114 10:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:50.114 10:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:50.114 10:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.114 10:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.114 10:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.114 10:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:50.114 10:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.114 10:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:50.114 10:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:50.114 10:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:50.114 10:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63111 00:08:50.114 10:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 63111 ']' 00:08:50.114 10:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 63111 00:08:50.114 10:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:50.114 10:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:50.114 10:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63111 00:08:50.114 killing process with pid 63111 00:08:50.114 10:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:08:50.114 10:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:50.114 10:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63111' 00:08:50.114 10:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 63111 00:08:50.114 10:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 63111 00:08:50.114 [2024-11-20 10:31:53.531756] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:50.114 [2024-11-20 10:31:53.552471] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:51.491 ************************************ 00:08:51.491 END TEST raid_state_function_test_sb 00:08:51.491 ************************************ 00:08:51.491 10:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:51.492 00:08:51.492 real 0m5.355s 00:08:51.492 user 0m7.674s 00:08:51.492 sys 0m0.803s 00:08:51.492 10:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.492 10:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.492 10:31:54 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:51.492 10:31:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:51.492 10:31:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.492 10:31:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:51.492 ************************************ 00:08:51.492 START TEST raid_superblock_test 00:08:51.492 ************************************ 00:08:51.492 10:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:08:51.492 10:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:08:51.492 10:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:51.492 10:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:51.492 10:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:51.492 10:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:51.492 10:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:51.492 10:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:51.492 10:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:51.492 10:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:51.492 10:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:51.492 10:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:51.492 10:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:51.492 10:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:51.492 10:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:51.492 10:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:51.492 10:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63367 00:08:51.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:51.492 10:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63367 00:08:51.492 10:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63367 ']' 00:08:51.492 10:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.492 10:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:51.492 10:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.492 10:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:51.492 10:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:51.492 10:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.751 [2024-11-20 10:31:54.997997] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:08:51.751 [2024-11-20 10:31:54.998109] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63367 ] 00:08:51.751 [2024-11-20 10:31:55.155585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.009 [2024-11-20 10:31:55.305713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.268 [2024-11-20 10:31:55.565440] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.268 [2024-11-20 10:31:55.565525] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.527 10:31:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:52.527 10:31:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:52.528 
10:31:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.528 malloc1 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.528 [2024-11-20 10:31:55.913097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:52.528 [2024-11-20 10:31:55.913184] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.528 [2024-11-20 10:31:55.913212] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:52.528 [2024-11-20 10:31:55.913222] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.528 [2024-11-20 10:31:55.915811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.528 [2024-11-20 10:31:55.915851] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:52.528 pt1 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.528 malloc2 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.528 [2024-11-20 10:31:55.978438] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:52.528 [2024-11-20 10:31:55.978518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.528 [2024-11-20 10:31:55.978545] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:52.528 [2024-11-20 10:31:55.978556] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.528 [2024-11-20 10:31:55.981286] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.528 pt2 00:08:52.528 [2024-11-20 10:31:55.981436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.528 [2024-11-20 10:31:55.990482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:52.528 [2024-11-20 10:31:55.992842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:52.528 [2024-11-20 10:31:55.993099] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:52.528 [2024-11-20 10:31:55.993124] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:52.528 [2024-11-20 10:31:55.993416] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:52.528 [2024-11-20 10:31:55.993606] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:52.528 [2024-11-20 10:31:55.993625] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:52.528 [2024-11-20 10:31:55.993801] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:52.528 10:31:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.788 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.788 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.788 "name": "raid_bdev1", 00:08:52.788 "uuid": "f95d07e7-eaa9-4645-b78c-073c14526447", 00:08:52.788 "strip_size_kb": 0, 00:08:52.788 "state": "online", 00:08:52.788 "raid_level": "raid1", 00:08:52.788 "superblock": true, 00:08:52.788 "num_base_bdevs": 2, 00:08:52.788 "num_base_bdevs_discovered": 2, 00:08:52.788 "num_base_bdevs_operational": 2, 00:08:52.788 "base_bdevs_list": [ 00:08:52.788 { 00:08:52.788 "name": "pt1", 00:08:52.788 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:08:52.788 "is_configured": true, 00:08:52.788 "data_offset": 2048, 00:08:52.788 "data_size": 63488 00:08:52.788 }, 00:08:52.788 { 00:08:52.788 "name": "pt2", 00:08:52.788 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:52.788 "is_configured": true, 00:08:52.788 "data_offset": 2048, 00:08:52.788 "data_size": 63488 00:08:52.788 } 00:08:52.788 ] 00:08:52.788 }' 00:08:52.788 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.788 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.047 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:53.047 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:53.047 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:53.047 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:53.047 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:53.047 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:53.047 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:53.047 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:53.047 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.047 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.047 [2024-11-20 10:31:56.458028] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:53.047 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.047 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:08:53.047 "name": "raid_bdev1", 00:08:53.047 "aliases": [ 00:08:53.047 "f95d07e7-eaa9-4645-b78c-073c14526447" 00:08:53.047 ], 00:08:53.047 "product_name": "Raid Volume", 00:08:53.047 "block_size": 512, 00:08:53.047 "num_blocks": 63488, 00:08:53.047 "uuid": "f95d07e7-eaa9-4645-b78c-073c14526447", 00:08:53.047 "assigned_rate_limits": { 00:08:53.047 "rw_ios_per_sec": 0, 00:08:53.047 "rw_mbytes_per_sec": 0, 00:08:53.047 "r_mbytes_per_sec": 0, 00:08:53.047 "w_mbytes_per_sec": 0 00:08:53.047 }, 00:08:53.047 "claimed": false, 00:08:53.047 "zoned": false, 00:08:53.047 "supported_io_types": { 00:08:53.047 "read": true, 00:08:53.047 "write": true, 00:08:53.047 "unmap": false, 00:08:53.047 "flush": false, 00:08:53.047 "reset": true, 00:08:53.047 "nvme_admin": false, 00:08:53.047 "nvme_io": false, 00:08:53.047 "nvme_io_md": false, 00:08:53.047 "write_zeroes": true, 00:08:53.047 "zcopy": false, 00:08:53.047 "get_zone_info": false, 00:08:53.048 "zone_management": false, 00:08:53.048 "zone_append": false, 00:08:53.048 "compare": false, 00:08:53.048 "compare_and_write": false, 00:08:53.048 "abort": false, 00:08:53.048 "seek_hole": false, 00:08:53.048 "seek_data": false, 00:08:53.048 "copy": false, 00:08:53.048 "nvme_iov_md": false 00:08:53.048 }, 00:08:53.048 "memory_domains": [ 00:08:53.048 { 00:08:53.048 "dma_device_id": "system", 00:08:53.048 "dma_device_type": 1 00:08:53.048 }, 00:08:53.048 { 00:08:53.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.048 "dma_device_type": 2 00:08:53.048 }, 00:08:53.048 { 00:08:53.048 "dma_device_id": "system", 00:08:53.048 "dma_device_type": 1 00:08:53.048 }, 00:08:53.048 { 00:08:53.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.048 "dma_device_type": 2 00:08:53.048 } 00:08:53.048 ], 00:08:53.048 "driver_specific": { 00:08:53.048 "raid": { 00:08:53.048 "uuid": "f95d07e7-eaa9-4645-b78c-073c14526447", 00:08:53.048 "strip_size_kb": 0, 00:08:53.048 "state": "online", 00:08:53.048 "raid_level": "raid1", 
00:08:53.048 "superblock": true, 00:08:53.048 "num_base_bdevs": 2, 00:08:53.048 "num_base_bdevs_discovered": 2, 00:08:53.048 "num_base_bdevs_operational": 2, 00:08:53.048 "base_bdevs_list": [ 00:08:53.048 { 00:08:53.048 "name": "pt1", 00:08:53.048 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:53.048 "is_configured": true, 00:08:53.048 "data_offset": 2048, 00:08:53.048 "data_size": 63488 00:08:53.048 }, 00:08:53.048 { 00:08:53.048 "name": "pt2", 00:08:53.048 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:53.048 "is_configured": true, 00:08:53.048 "data_offset": 2048, 00:08:53.048 "data_size": 63488 00:08:53.048 } 00:08:53.048 ] 00:08:53.048 } 00:08:53.048 } 00:08:53.048 }' 00:08:53.048 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:53.307 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:53.308 pt2' 00:08:53.308 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.308 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:53.308 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:53.308 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:53.308 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.308 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.308 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.308 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.308 10:31:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:53.308 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:53.308 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:53.308 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:53.308 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.308 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.308 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.308 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.308 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:53.308 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:53.308 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:53.308 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.308 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:53.308 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.308 [2024-11-20 10:31:56.677649] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:53.308 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.308 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f95d07e7-eaa9-4645-b78c-073c14526447 00:08:53.308 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f95d07e7-eaa9-4645-b78c-073c14526447 ']' 00:08:53.308 10:31:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:53.308 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.308 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.308 [2024-11-20 10:31:56.725183] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:53.308 [2024-11-20 10:31:56.725303] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:53.308 [2024-11-20 10:31:56.725451] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:53.308 [2024-11-20 10:31:56.725557] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:53.308 [2024-11-20 10:31:56.725613] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:53.308 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.308 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.308 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:53.308 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.308 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.308 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.308 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:53.308 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:53.308 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:53.308 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:08:53.308 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.308 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.568 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.568 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:53.568 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:53.568 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.568 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.568 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.568 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:53.568 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:53.568 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.568 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.568 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.568 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:53.568 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:53.568 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:53.568 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:53.568 10:31:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:53.568 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:53.568 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:53.568 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:53.568 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:53.568 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.568 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.568 [2024-11-20 10:31:56.849043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:53.568 [2024-11-20 10:31:56.851547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:53.568 [2024-11-20 10:31:56.851628] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:53.568 [2024-11-20 10:31:56.851705] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:53.568 [2024-11-20 10:31:56.851723] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:53.569 [2024-11-20 10:31:56.851737] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:53.569 request: 00:08:53.569 { 00:08:53.569 "name": "raid_bdev1", 00:08:53.569 "raid_level": "raid1", 00:08:53.569 "base_bdevs": [ 00:08:53.569 "malloc1", 00:08:53.569 "malloc2" 00:08:53.569 ], 00:08:53.569 "superblock": false, 00:08:53.569 "method": "bdev_raid_create", 00:08:53.569 "req_id": 1 00:08:53.569 } 00:08:53.569 Got 
JSON-RPC error response 00:08:53.569 response: 00:08:53.569 { 00:08:53.569 "code": -17, 00:08:53.569 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:53.569 } 00:08:53.569 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:53.569 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:53.569 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:53.569 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:53.569 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:53.569 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:53.569 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.569 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.569 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.569 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.569 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:53.569 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:53.569 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:53.569 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.569 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.569 [2024-11-20 10:31:56.892919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:53.569 [2024-11-20 10:31:56.893006] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:08:53.569 [2024-11-20 10:31:56.893029] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:53.569 [2024-11-20 10:31:56.893042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:53.569 [2024-11-20 10:31:56.895922] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:53.569 [2024-11-20 10:31:56.896037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:53.569 [2024-11-20 10:31:56.896154] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:53.569 [2024-11-20 10:31:56.896243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:53.569 pt1 00:08:53.569 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.569 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:53.569 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:53.569 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.569 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:53.569 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:53.569 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:53.569 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.569 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.569 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.569 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.569 
10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.569 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.569 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:53.569 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.569 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.569 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.569 "name": "raid_bdev1", 00:08:53.569 "uuid": "f95d07e7-eaa9-4645-b78c-073c14526447", 00:08:53.569 "strip_size_kb": 0, 00:08:53.569 "state": "configuring", 00:08:53.569 "raid_level": "raid1", 00:08:53.569 "superblock": true, 00:08:53.569 "num_base_bdevs": 2, 00:08:53.569 "num_base_bdevs_discovered": 1, 00:08:53.569 "num_base_bdevs_operational": 2, 00:08:53.569 "base_bdevs_list": [ 00:08:53.569 { 00:08:53.569 "name": "pt1", 00:08:53.569 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:53.569 "is_configured": true, 00:08:53.569 "data_offset": 2048, 00:08:53.569 "data_size": 63488 00:08:53.569 }, 00:08:53.569 { 00:08:53.569 "name": null, 00:08:53.569 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:53.569 "is_configured": false, 00:08:53.569 "data_offset": 2048, 00:08:53.569 "data_size": 63488 00:08:53.569 } 00:08:53.569 ] 00:08:53.569 }' 00:08:53.569 10:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.569 10:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.137 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:54.137 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:54.137 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:08:54.137 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:54.137 10:31:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.137 10:31:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.137 [2024-11-20 10:31:57.316435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:54.137 [2024-11-20 10:31:57.316558] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.137 [2024-11-20 10:31:57.316587] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:54.137 [2024-11-20 10:31:57.316601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.137 [2024-11-20 10:31:57.317199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.137 [2024-11-20 10:31:57.317223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:54.137 [2024-11-20 10:31:57.317332] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:54.137 [2024-11-20 10:31:57.317381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:54.137 [2024-11-20 10:31:57.317526] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:54.137 [2024-11-20 10:31:57.317617] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:54.137 [2024-11-20 10:31:57.317913] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:54.137 [2024-11-20 10:31:57.318103] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:54.137 [2024-11-20 10:31:57.318113] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007e80 00:08:54.137 [2024-11-20 10:31:57.318305] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:54.137 pt2 00:08:54.137 10:31:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.137 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:54.137 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:54.137 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:54.137 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:54.137 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:54.137 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:54.137 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:54.137 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:54.137 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.137 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.137 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.137 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.137 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.137 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:54.137 10:31:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.137 10:31:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:08:54.137 10:31:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.137 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.137 "name": "raid_bdev1", 00:08:54.137 "uuid": "f95d07e7-eaa9-4645-b78c-073c14526447", 00:08:54.137 "strip_size_kb": 0, 00:08:54.137 "state": "online", 00:08:54.137 "raid_level": "raid1", 00:08:54.137 "superblock": true, 00:08:54.137 "num_base_bdevs": 2, 00:08:54.137 "num_base_bdevs_discovered": 2, 00:08:54.137 "num_base_bdevs_operational": 2, 00:08:54.137 "base_bdevs_list": [ 00:08:54.137 { 00:08:54.137 "name": "pt1", 00:08:54.137 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:54.137 "is_configured": true, 00:08:54.137 "data_offset": 2048, 00:08:54.137 "data_size": 63488 00:08:54.137 }, 00:08:54.137 { 00:08:54.137 "name": "pt2", 00:08:54.137 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:54.137 "is_configured": true, 00:08:54.137 "data_offset": 2048, 00:08:54.137 "data_size": 63488 00:08:54.137 } 00:08:54.137 ] 00:08:54.137 }' 00:08:54.137 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.137 10:31:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.397 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:54.397 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:54.397 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:54.397 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:54.397 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:54.397 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:54.397 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:54.397 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:54.397 10:31:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.397 10:31:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.397 [2024-11-20 10:31:57.755958] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:54.397 10:31:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.397 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:54.397 "name": "raid_bdev1", 00:08:54.397 "aliases": [ 00:08:54.397 "f95d07e7-eaa9-4645-b78c-073c14526447" 00:08:54.397 ], 00:08:54.397 "product_name": "Raid Volume", 00:08:54.397 "block_size": 512, 00:08:54.397 "num_blocks": 63488, 00:08:54.397 "uuid": "f95d07e7-eaa9-4645-b78c-073c14526447", 00:08:54.397 "assigned_rate_limits": { 00:08:54.397 "rw_ios_per_sec": 0, 00:08:54.397 "rw_mbytes_per_sec": 0, 00:08:54.397 "r_mbytes_per_sec": 0, 00:08:54.397 "w_mbytes_per_sec": 0 00:08:54.397 }, 00:08:54.397 "claimed": false, 00:08:54.397 "zoned": false, 00:08:54.397 "supported_io_types": { 00:08:54.397 "read": true, 00:08:54.397 "write": true, 00:08:54.397 "unmap": false, 00:08:54.397 "flush": false, 00:08:54.397 "reset": true, 00:08:54.397 "nvme_admin": false, 00:08:54.397 "nvme_io": false, 00:08:54.397 "nvme_io_md": false, 00:08:54.397 "write_zeroes": true, 00:08:54.397 "zcopy": false, 00:08:54.397 "get_zone_info": false, 00:08:54.397 "zone_management": false, 00:08:54.397 "zone_append": false, 00:08:54.397 "compare": false, 00:08:54.397 "compare_and_write": false, 00:08:54.397 "abort": false, 00:08:54.397 "seek_hole": false, 00:08:54.397 "seek_data": false, 00:08:54.397 "copy": false, 00:08:54.397 "nvme_iov_md": false 00:08:54.397 }, 00:08:54.397 "memory_domains": [ 00:08:54.397 { 00:08:54.397 "dma_device_id": 
"system", 00:08:54.397 "dma_device_type": 1 00:08:54.397 }, 00:08:54.397 { 00:08:54.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.397 "dma_device_type": 2 00:08:54.397 }, 00:08:54.397 { 00:08:54.397 "dma_device_id": "system", 00:08:54.397 "dma_device_type": 1 00:08:54.397 }, 00:08:54.397 { 00:08:54.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.397 "dma_device_type": 2 00:08:54.397 } 00:08:54.397 ], 00:08:54.397 "driver_specific": { 00:08:54.397 "raid": { 00:08:54.397 "uuid": "f95d07e7-eaa9-4645-b78c-073c14526447", 00:08:54.397 "strip_size_kb": 0, 00:08:54.397 "state": "online", 00:08:54.397 "raid_level": "raid1", 00:08:54.397 "superblock": true, 00:08:54.397 "num_base_bdevs": 2, 00:08:54.397 "num_base_bdevs_discovered": 2, 00:08:54.397 "num_base_bdevs_operational": 2, 00:08:54.397 "base_bdevs_list": [ 00:08:54.397 { 00:08:54.397 "name": "pt1", 00:08:54.397 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:54.397 "is_configured": true, 00:08:54.397 "data_offset": 2048, 00:08:54.397 "data_size": 63488 00:08:54.397 }, 00:08:54.397 { 00:08:54.397 "name": "pt2", 00:08:54.397 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:54.397 "is_configured": true, 00:08:54.397 "data_offset": 2048, 00:08:54.397 "data_size": 63488 00:08:54.397 } 00:08:54.397 ] 00:08:54.397 } 00:08:54.397 } 00:08:54.397 }' 00:08:54.397 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:54.397 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:54.397 pt2' 00:08:54.397 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.657 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:54.657 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:08:54.657 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:54.657 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.657 10:31:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.657 10:31:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.657 10:31:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.657 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:54.657 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:54.657 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:54.657 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.657 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:54.657 10:31:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.657 10:31:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.657 10:31:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.657 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:54.657 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:54.657 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:54.658 10:31:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.658 10:31:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:54.658 10:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:54.658 [2024-11-20 10:31:57.983535] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:54.658 10:31:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.658 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f95d07e7-eaa9-4645-b78c-073c14526447 '!=' f95d07e7-eaa9-4645-b78c-073c14526447 ']' 00:08:54.658 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:54.658 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:54.658 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:54.658 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:54.658 10:31:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.658 10:31:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.658 [2024-11-20 10:31:58.031233] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:54.658 10:31:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.658 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:54.658 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:54.658 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:54.658 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:54.658 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:54.658 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=1 00:08:54.658 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.658 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.658 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.658 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.658 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.658 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:54.658 10:31:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.658 10:31:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.658 10:31:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.658 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.658 "name": "raid_bdev1", 00:08:54.658 "uuid": "f95d07e7-eaa9-4645-b78c-073c14526447", 00:08:54.658 "strip_size_kb": 0, 00:08:54.658 "state": "online", 00:08:54.658 "raid_level": "raid1", 00:08:54.658 "superblock": true, 00:08:54.658 "num_base_bdevs": 2, 00:08:54.658 "num_base_bdevs_discovered": 1, 00:08:54.658 "num_base_bdevs_operational": 1, 00:08:54.658 "base_bdevs_list": [ 00:08:54.658 { 00:08:54.658 "name": null, 00:08:54.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.658 "is_configured": false, 00:08:54.658 "data_offset": 0, 00:08:54.658 "data_size": 63488 00:08:54.658 }, 00:08:54.658 { 00:08:54.658 "name": "pt2", 00:08:54.658 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:54.658 "is_configured": true, 00:08:54.658 "data_offset": 2048, 00:08:54.658 "data_size": 63488 00:08:54.658 } 00:08:54.658 ] 00:08:54.658 }' 00:08:54.658 10:31:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.658 10:31:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.226 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:55.226 10:31:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.226 10:31:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.226 [2024-11-20 10:31:58.478492] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:55.226 [2024-11-20 10:31:58.478541] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:55.226 [2024-11-20 10:31:58.478654] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:55.226 [2024-11-20 10:31:58.478716] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:55.226 [2024-11-20 10:31:58.478730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:55.226 10:31:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.226 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.226 10:31:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.226 10:31:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.226 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:55.226 10:31:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.226 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:55.226 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:55.226 
10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:55.226 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:55.226 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:55.226 10:31:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.226 10:31:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.226 10:31:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.226 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:55.226 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:55.226 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:55.226 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:55.226 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:08:55.226 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:55.226 10:31:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.226 10:31:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.226 [2024-11-20 10:31:58.538291] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:55.226 [2024-11-20 10:31:58.538413] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.226 [2024-11-20 10:31:58.538436] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:55.226 [2024-11-20 10:31:58.538449] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.226 [2024-11-20 
10:31:58.541243] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.226 [2024-11-20 10:31:58.541285] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:55.226 [2024-11-20 10:31:58.541401] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:55.227 [2024-11-20 10:31:58.541465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:55.227 [2024-11-20 10:31:58.541591] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:55.227 [2024-11-20 10:31:58.541611] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:55.227 [2024-11-20 10:31:58.541852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:55.227 [2024-11-20 10:31:58.542015] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:55.227 [2024-11-20 10:31:58.542025] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:55.227 [2024-11-20 10:31:58.542222] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.227 pt2 00:08:55.227 10:31:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.227 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:55.227 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:55.227 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:55.227 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:55.227 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:55.227 10:31:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:55.227 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.227 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.227 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.227 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.227 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.227 10:31:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.227 10:31:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.227 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:55.227 10:31:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.227 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.227 "name": "raid_bdev1", 00:08:55.227 "uuid": "f95d07e7-eaa9-4645-b78c-073c14526447", 00:08:55.227 "strip_size_kb": 0, 00:08:55.227 "state": "online", 00:08:55.227 "raid_level": "raid1", 00:08:55.227 "superblock": true, 00:08:55.227 "num_base_bdevs": 2, 00:08:55.227 "num_base_bdevs_discovered": 1, 00:08:55.227 "num_base_bdevs_operational": 1, 00:08:55.227 "base_bdevs_list": [ 00:08:55.227 { 00:08:55.227 "name": null, 00:08:55.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.227 "is_configured": false, 00:08:55.227 "data_offset": 2048, 00:08:55.227 "data_size": 63488 00:08:55.227 }, 00:08:55.227 { 00:08:55.227 "name": "pt2", 00:08:55.227 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:55.227 "is_configured": true, 00:08:55.227 "data_offset": 2048, 00:08:55.227 "data_size": 63488 00:08:55.227 } 00:08:55.227 ] 00:08:55.227 }' 
00:08:55.227 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.227 10:31:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.485 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:55.485 10:31:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.485 10:31:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.485 [2024-11-20 10:31:58.961603] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:55.485 [2024-11-20 10:31:58.961746] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:55.485 [2024-11-20 10:31:58.961876] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:55.486 [2024-11-20 10:31:58.961964] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:55.486 [2024-11-20 10:31:58.962023] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:55.745 10:31:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.745 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.745 10:31:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.745 10:31:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.745 10:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:55.745 10:31:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.745 10:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:55.745 10:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:08:55.745 10:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:55.745 10:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:55.745 10:31:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.745 10:31:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.745 [2024-11-20 10:31:59.013513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:55.745 [2024-11-20 10:31:59.013603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.745 [2024-11-20 10:31:59.013629] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:55.745 [2024-11-20 10:31:59.013640] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.745 [2024-11-20 10:31:59.016415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.745 [2024-11-20 10:31:59.016531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:55.745 [2024-11-20 10:31:59.016655] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:55.745 [2024-11-20 10:31:59.016718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:55.745 [2024-11-20 10:31:59.016912] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:55.745 [2024-11-20 10:31:59.016923] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:55.745 [2024-11-20 10:31:59.016941] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:08:55.745 [2024-11-20 10:31:59.017005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:08:55.745 [2024-11-20 10:31:59.017101] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:08:55.745 [2024-11-20 10:31:59.017111] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:55.745 [2024-11-20 10:31:59.017396] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:55.745 [2024-11-20 10:31:59.017566] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:08:55.745 [2024-11-20 10:31:59.017581] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:08:55.745 [2024-11-20 10:31:59.017781] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.745 pt1 00:08:55.745 10:31:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.745 10:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:55.745 10:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:55.745 10:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:55.745 10:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:55.745 10:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:55.745 10:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:55.745 10:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:55.745 10:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.745 10:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.745 10:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:55.745 10:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.745 10:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.745 10:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:55.745 10:31:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.745 10:31:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.745 10:31:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.745 10:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.745 "name": "raid_bdev1", 00:08:55.745 "uuid": "f95d07e7-eaa9-4645-b78c-073c14526447", 00:08:55.745 "strip_size_kb": 0, 00:08:55.745 "state": "online", 00:08:55.745 "raid_level": "raid1", 00:08:55.745 "superblock": true, 00:08:55.745 "num_base_bdevs": 2, 00:08:55.745 "num_base_bdevs_discovered": 1, 00:08:55.745 "num_base_bdevs_operational": 1, 00:08:55.745 "base_bdevs_list": [ 00:08:55.745 { 00:08:55.745 "name": null, 00:08:55.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.745 "is_configured": false, 00:08:55.745 "data_offset": 2048, 00:08:55.745 "data_size": 63488 00:08:55.745 }, 00:08:55.745 { 00:08:55.745 "name": "pt2", 00:08:55.745 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:55.745 "is_configured": true, 00:08:55.745 "data_offset": 2048, 00:08:55.745 "data_size": 63488 00:08:55.745 } 00:08:55.745 ] 00:08:55.745 }' 00:08:55.745 10:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.745 10:31:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.003 10:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:56.003 10:31:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:56.003 10:31:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.003 10:31:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.004 10:31:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.262 10:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:56.262 10:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:56.262 10:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:56.262 10:31:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.262 10:31:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.262 [2024-11-20 10:31:59.497259] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:56.262 10:31:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.262 10:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' f95d07e7-eaa9-4645-b78c-073c14526447 '!=' f95d07e7-eaa9-4645-b78c-073c14526447 ']' 00:08:56.262 10:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63367 00:08:56.262 10:31:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63367 ']' 00:08:56.262 10:31:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63367 00:08:56.262 10:31:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:56.262 10:31:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:56.262 10:31:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63367 00:08:56.262 killing 
process with pid 63367 00:08:56.262 10:31:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:56.262 10:31:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:56.262 10:31:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63367' 00:08:56.262 10:31:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63367 00:08:56.262 10:31:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63367 00:08:56.262 [2024-11-20 10:31:59.546828] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:56.262 [2024-11-20 10:31:59.546971] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:56.262 [2024-11-20 10:31:59.547124] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:56.262 [2024-11-20 10:31:59.547147] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:08:56.521 [2024-11-20 10:31:59.791124] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:57.899 10:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:57.899 00:08:57.899 real 0m6.233s 00:08:57.899 user 0m9.188s 00:08:57.899 sys 0m1.115s 00:08:57.899 10:32:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.899 10:32:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.899 ************************************ 00:08:57.899 END TEST raid_superblock_test 00:08:57.899 ************************************ 00:08:57.899 10:32:01 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:57.899 10:32:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:57.899 10:32:01 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.899 10:32:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:57.899 ************************************ 00:08:57.899 START TEST raid_read_error_test 00:08:57.899 ************************************ 00:08:57.899 10:32:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:08:57.899 10:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:57.899 10:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:57.899 10:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:57.899 10:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:57.899 10:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:57.899 10:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:57.899 10:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:57.899 10:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:57.899 10:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:57.899 10:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:57.899 10:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:57.899 10:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:57.899 10:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:57.899 10:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:57.899 10:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:57.899 10:32:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:57.899 10:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:57.899 10:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:57.900 10:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:57.900 10:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:57.900 10:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:57.900 10:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ZFhpyAa6XK 00:08:57.900 10:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63697 00:08:57.900 10:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63697 00:08:57.900 10:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:57.900 10:32:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63697 ']' 00:08:57.900 10:32:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.900 10:32:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:57.900 10:32:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:57.900 10:32:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:57.900 10:32:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.900 [2024-11-20 10:32:01.327545] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:08:57.900 [2024-11-20 10:32:01.327806] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63697 ] 00:08:58.159 [2024-11-20 10:32:01.507370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.418 [2024-11-20 10:32:01.654856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.677 [2024-11-20 10:32:01.922912] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:58.677 [2024-11-20 10:32:01.922980] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.937 BaseBdev1_malloc 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.937 true 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.937 [2024-11-20 10:32:02.264131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:58.937 [2024-11-20 10:32:02.264210] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:58.937 [2024-11-20 10:32:02.264234] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:58.937 [2024-11-20 10:32:02.264247] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.937 [2024-11-20 10:32:02.266847] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.937 [2024-11-20 10:32:02.266888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:58.937 BaseBdev1 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:08:58.937 BaseBdev2_malloc 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.937 true 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.937 [2024-11-20 10:32:02.341077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:58.937 [2024-11-20 10:32:02.341154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:58.937 [2024-11-20 10:32:02.341174] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:58.937 [2024-11-20 10:32:02.341186] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.937 [2024-11-20 10:32:02.343695] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.937 [2024-11-20 10:32:02.343737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:58.937 BaseBdev2 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:58.937 10:32:02 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.937 [2024-11-20 10:32:02.353121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:58.937 [2024-11-20 10:32:02.355298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:58.937 [2024-11-20 10:32:02.355664] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:58.937 [2024-11-20 10:32:02.355696] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:58.937 [2024-11-20 10:32:02.355999] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:58.937 [2024-11-20 10:32:02.356234] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:58.937 [2024-11-20 10:32:02.356247] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:58.937 [2024-11-20 10:32:02.356448] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.937 "name": "raid_bdev1", 00:08:58.937 "uuid": "2137d083-b674-4407-bfa7-bf3a3b7e21f2", 00:08:58.937 "strip_size_kb": 0, 00:08:58.937 "state": "online", 00:08:58.937 "raid_level": "raid1", 00:08:58.937 "superblock": true, 00:08:58.937 "num_base_bdevs": 2, 00:08:58.937 "num_base_bdevs_discovered": 2, 00:08:58.937 "num_base_bdevs_operational": 2, 00:08:58.937 "base_bdevs_list": [ 00:08:58.937 { 00:08:58.937 "name": "BaseBdev1", 00:08:58.937 "uuid": "9c17c96a-1bb8-51cc-8070-0985a1e08a47", 00:08:58.937 "is_configured": true, 00:08:58.937 "data_offset": 2048, 00:08:58.937 "data_size": 63488 00:08:58.937 }, 00:08:58.937 { 00:08:58.937 "name": "BaseBdev2", 00:08:58.937 "uuid": "a875ccb6-a7ec-51d8-aea8-3256b6b3df8c", 00:08:58.937 "is_configured": true, 00:08:58.937 "data_offset": 2048, 00:08:58.937 "data_size": 63488 00:08:58.937 } 00:08:58.937 ] 00:08:58.937 }' 00:08:58.937 10:32:02 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.197 10:32:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.455 10:32:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:59.455 10:32:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:59.714 [2024-11-20 10:32:02.969883] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:00.650 10:32:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:00.650 10:32:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.650 10:32:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.650 10:32:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.650 10:32:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:00.650 10:32:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:00.650 10:32:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:00.650 10:32:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:00.650 10:32:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:00.650 10:32:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:00.650 10:32:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:00.650 10:32:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:00.650 10:32:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:00.650 10:32:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:00.650 10:32:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.650 10:32:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.650 10:32:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.650 10:32:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.650 10:32:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.650 10:32:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:00.650 10:32:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.650 10:32:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.650 10:32:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.650 10:32:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.650 "name": "raid_bdev1", 00:09:00.650 "uuid": "2137d083-b674-4407-bfa7-bf3a3b7e21f2", 00:09:00.650 "strip_size_kb": 0, 00:09:00.650 "state": "online", 00:09:00.650 "raid_level": "raid1", 00:09:00.650 "superblock": true, 00:09:00.650 "num_base_bdevs": 2, 00:09:00.650 "num_base_bdevs_discovered": 2, 00:09:00.650 "num_base_bdevs_operational": 2, 00:09:00.650 "base_bdevs_list": [ 00:09:00.650 { 00:09:00.650 "name": "BaseBdev1", 00:09:00.650 "uuid": "9c17c96a-1bb8-51cc-8070-0985a1e08a47", 00:09:00.650 "is_configured": true, 00:09:00.650 "data_offset": 2048, 00:09:00.650 "data_size": 63488 00:09:00.650 }, 00:09:00.650 { 00:09:00.650 "name": "BaseBdev2", 00:09:00.650 "uuid": "a875ccb6-a7ec-51d8-aea8-3256b6b3df8c", 00:09:00.650 "is_configured": true, 00:09:00.650 "data_offset": 2048, 00:09:00.650 "data_size": 63488 
00:09:00.650 } 00:09:00.650 ] 00:09:00.650 }' 00:09:00.650 10:32:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.650 10:32:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.908 10:32:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:00.908 10:32:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.908 10:32:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.908 [2024-11-20 10:32:04.354759] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:00.908 [2024-11-20 10:32:04.354799] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:00.908 [2024-11-20 10:32:04.357995] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:00.908 [2024-11-20 10:32:04.358087] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:00.908 [2024-11-20 10:32:04.358214] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:00.908 [2024-11-20 10:32:04.358271] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:00.908 { 00:09:00.908 "results": [ 00:09:00.908 { 00:09:00.908 "job": "raid_bdev1", 00:09:00.908 "core_mask": "0x1", 00:09:00.908 "workload": "randrw", 00:09:00.908 "percentage": 50, 00:09:00.908 "status": "finished", 00:09:00.908 "queue_depth": 1, 00:09:00.908 "io_size": 131072, 00:09:00.908 "runtime": 1.385308, 00:09:00.908 "iops": 15927.865860877147, 00:09:00.908 "mibps": 1990.9832326096434, 00:09:00.908 "io_failed": 0, 00:09:00.908 "io_timeout": 0, 00:09:00.908 "avg_latency_us": 59.66067844409678, 00:09:00.908 "min_latency_us": 24.817467248908297, 00:09:00.908 "max_latency_us": 1717.1004366812226 00:09:00.908 } 00:09:00.908 ], 
00:09:00.908 "core_count": 1 00:09:00.908 } 00:09:00.908 10:32:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.908 10:32:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63697 00:09:00.908 10:32:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63697 ']' 00:09:00.908 10:32:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63697 00:09:00.908 10:32:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:00.908 10:32:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:00.908 10:32:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63697 00:09:01.169 10:32:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:01.169 10:32:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:01.169 10:32:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63697' 00:09:01.169 killing process with pid 63697 00:09:01.169 10:32:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63697 00:09:01.169 [2024-11-20 10:32:04.404298] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:01.169 10:32:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63697 00:09:01.169 [2024-11-20 10:32:04.564326] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:02.549 10:32:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ZFhpyAa6XK 00:09:02.549 10:32:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:02.549 10:32:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:02.549 10:32:05 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:02.549 10:32:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:02.549 10:32:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:02.549 10:32:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:02.549 10:32:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:02.549 ************************************ 00:09:02.549 END TEST raid_read_error_test 00:09:02.549 ************************************ 00:09:02.549 00:09:02.549 real 0m4.727s 00:09:02.549 user 0m5.646s 00:09:02.549 sys 0m0.629s 00:09:02.549 10:32:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.549 10:32:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.549 10:32:05 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:09:02.549 10:32:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:02.549 10:32:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.549 10:32:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:02.549 ************************************ 00:09:02.549 START TEST raid_write_error_test 00:09:02.549 ************************************ 00:09:02.549 10:32:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:09:02.549 10:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:02.549 10:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:02.549 10:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:02.549 10:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:02.549 10:32:06 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:02.549 10:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:02.549 10:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:02.549 10:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:02.549 10:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:02.549 10:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:02.549 10:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:02.549 10:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:02.549 10:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:02.549 10:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:02.549 10:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:02.549 10:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:02.549 10:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:02.549 10:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:02.549 10:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:02.549 10:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:02.549 10:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:02.549 10:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.4TMCqJUxRP 00:09:02.808 10:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63848 00:09:02.808 10:32:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:02.808 10:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63848 00:09:02.808 10:32:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63848 ']' 00:09:02.808 10:32:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.808 10:32:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.808 10:32:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.808 10:32:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.808 10:32:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.808 [2024-11-20 10:32:06.119188] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:09:02.808 [2024-11-20 10:32:06.119430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63848 ] 00:09:03.067 [2024-11-20 10:32:06.303023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.067 [2024-11-20 10:32:06.437641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.326 [2024-11-20 10:32:06.675294] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:03.326 [2024-11-20 10:32:06.675379] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:03.585 10:32:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.585 10:32:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:03.585 10:32:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:03.585 10:32:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:03.585 10:32:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.585 10:32:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.845 BaseBdev1_malloc 00:09:03.845 10:32:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.845 10:32:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:03.845 10:32:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.845 10:32:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.845 true 00:09:03.845 10:32:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:03.845 10:32:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:03.845 10:32:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.845 10:32:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.845 [2024-11-20 10:32:07.087089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:03.845 [2024-11-20 10:32:07.087151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.845 [2024-11-20 10:32:07.087190] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:03.845 [2024-11-20 10:32:07.087202] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.845 [2024-11-20 10:32:07.089657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.845 [2024-11-20 10:32:07.089701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:03.845 BaseBdev1 00:09:03.845 10:32:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.845 10:32:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:03.845 10:32:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:03.845 10:32:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.845 10:32:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.845 BaseBdev2_malloc 00:09:03.845 10:32:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.845 10:32:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:03.845 10:32:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.845 10:32:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.845 true 00:09:03.845 10:32:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.845 10:32:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:03.845 10:32:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.845 10:32:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.845 [2024-11-20 10:32:07.158230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:03.845 [2024-11-20 10:32:07.158294] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.845 [2024-11-20 10:32:07.158314] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:03.845 [2024-11-20 10:32:07.158326] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.845 [2024-11-20 10:32:07.160786] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.845 [2024-11-20 10:32:07.160884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:03.845 BaseBdev2 00:09:03.845 10:32:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.845 10:32:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:03.846 10:32:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.846 10:32:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.846 [2024-11-20 10:32:07.170270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:03.846 [2024-11-20 10:32:07.172390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:03.846 [2024-11-20 10:32:07.172614] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:03.846 [2024-11-20 10:32:07.172632] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:03.846 [2024-11-20 10:32:07.172918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:03.846 [2024-11-20 10:32:07.173142] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:03.846 [2024-11-20 10:32:07.173155] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:03.846 [2024-11-20 10:32:07.173350] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:03.846 10:32:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.846 10:32:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:03.846 10:32:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:03.846 10:32:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:03.846 10:32:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:03.846 10:32:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:03.846 10:32:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:03.846 10:32:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.846 10:32:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.846 10:32:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.846 10:32:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.846 10:32:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.846 10:32:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.846 10:32:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:03.846 10:32:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.846 10:32:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.846 10:32:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.846 "name": "raid_bdev1", 00:09:03.846 "uuid": "1f88df83-ba6f-4714-92a5-a8d98820df32", 00:09:03.846 "strip_size_kb": 0, 00:09:03.846 "state": "online", 00:09:03.846 "raid_level": "raid1", 00:09:03.846 "superblock": true, 00:09:03.846 "num_base_bdevs": 2, 00:09:03.846 "num_base_bdevs_discovered": 2, 00:09:03.846 "num_base_bdevs_operational": 2, 00:09:03.846 "base_bdevs_list": [ 00:09:03.846 { 00:09:03.846 "name": "BaseBdev1", 00:09:03.846 "uuid": "3a19d097-3004-5283-8912-3c2dca299246", 00:09:03.846 "is_configured": true, 00:09:03.846 "data_offset": 2048, 00:09:03.846 "data_size": 63488 00:09:03.846 }, 00:09:03.846 { 00:09:03.846 "name": "BaseBdev2", 00:09:03.846 "uuid": "f475aa02-0bf0-57ae-b8a4-f6b1c661bd99", 00:09:03.846 "is_configured": true, 00:09:03.846 "data_offset": 2048, 00:09:03.846 "data_size": 63488 00:09:03.846 } 00:09:03.846 ] 00:09:03.846 }' 00:09:03.846 10:32:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.846 10:32:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.415 10:32:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:04.415 10:32:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:04.415 [2024-11-20 10:32:07.746777] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:05.353 10:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:05.353 10:32:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.353 10:32:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.353 [2024-11-20 10:32:08.658985] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:05.353 [2024-11-20 10:32:08.659155] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:05.353 [2024-11-20 10:32:08.659402] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:09:05.353 10:32:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.353 10:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:05.353 10:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:05.353 10:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:05.353 10:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:09:05.353 10:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:05.353 10:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:05.353 10:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:05.353 10:32:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:05.353 10:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:05.353 10:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:05.353 10:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.353 10:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.353 10:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.353 10:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.353 10:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.353 10:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:05.353 10:32:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.353 10:32:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.353 10:32:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.353 10:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.353 "name": "raid_bdev1", 00:09:05.353 "uuid": "1f88df83-ba6f-4714-92a5-a8d98820df32", 00:09:05.353 "strip_size_kb": 0, 00:09:05.353 "state": "online", 00:09:05.353 "raid_level": "raid1", 00:09:05.353 "superblock": true, 00:09:05.353 "num_base_bdevs": 2, 00:09:05.353 "num_base_bdevs_discovered": 1, 00:09:05.353 "num_base_bdevs_operational": 1, 00:09:05.353 "base_bdevs_list": [ 00:09:05.353 { 00:09:05.353 "name": null, 00:09:05.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.353 "is_configured": false, 00:09:05.353 "data_offset": 0, 00:09:05.353 "data_size": 63488 00:09:05.353 }, 
00:09:05.353 { 00:09:05.353 "name": "BaseBdev2", 00:09:05.353 "uuid": "f475aa02-0bf0-57ae-b8a4-f6b1c661bd99", 00:09:05.353 "is_configured": true, 00:09:05.353 "data_offset": 2048, 00:09:05.353 "data_size": 63488 00:09:05.353 } 00:09:05.353 ] 00:09:05.353 }' 00:09:05.353 10:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.353 10:32:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.921 10:32:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:05.921 10:32:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.921 10:32:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.921 [2024-11-20 10:32:09.144551] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:05.921 [2024-11-20 10:32:09.144648] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:05.921 [2024-11-20 10:32:09.147681] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:05.921 [2024-11-20 10:32:09.147729] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:05.921 [2024-11-20 10:32:09.147794] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:05.921 [2024-11-20 10:32:09.147805] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:05.921 { 00:09:05.921 "results": [ 00:09:05.921 { 00:09:05.921 "job": "raid_bdev1", 00:09:05.921 "core_mask": "0x1", 00:09:05.921 "workload": "randrw", 00:09:05.921 "percentage": 50, 00:09:05.921 "status": "finished", 00:09:05.921 "queue_depth": 1, 00:09:05.921 "io_size": 131072, 00:09:05.921 "runtime": 1.398517, 00:09:05.921 "iops": 18503.171573888627, 00:09:05.921 "mibps": 2312.8964467360784, 00:09:05.921 "io_failed": 0, 
00:09:05.921 "io_timeout": 0, 00:09:05.921 "avg_latency_us": 50.96065528677571, 00:09:05.921 "min_latency_us": 25.3764192139738, 00:09:05.921 "max_latency_us": 1695.6366812227075 00:09:05.921 } 00:09:05.921 ], 00:09:05.921 "core_count": 1 00:09:05.921 } 00:09:05.921 10:32:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.921 10:32:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63848 00:09:05.921 10:32:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63848 ']' 00:09:05.921 10:32:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63848 00:09:05.921 10:32:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:05.921 10:32:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.921 10:32:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63848 00:09:05.921 10:32:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:05.921 10:32:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:05.921 10:32:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63848' 00:09:05.921 killing process with pid 63848 00:09:05.921 10:32:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63848 00:09:05.921 [2024-11-20 10:32:09.191615] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:05.921 10:32:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63848 00:09:05.921 [2024-11-20 10:32:09.352706] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:07.296 10:32:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:07.296 10:32:10 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.4TMCqJUxRP 00:09:07.296 10:32:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:07.296 10:32:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:07.296 10:32:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:07.296 10:32:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:07.296 10:32:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:07.296 10:32:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:07.296 00:09:07.296 real 0m4.685s 00:09:07.296 user 0m5.651s 00:09:07.296 sys 0m0.575s 00:09:07.296 10:32:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.296 ************************************ 00:09:07.296 END TEST raid_write_error_test 00:09:07.296 ************************************ 00:09:07.296 10:32:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.296 10:32:10 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:07.296 10:32:10 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:07.296 10:32:10 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:09:07.296 10:32:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:07.296 10:32:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.296 10:32:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:07.296 ************************************ 00:09:07.296 START TEST raid_state_function_test 00:09:07.296 ************************************ 00:09:07.296 10:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:09:07.296 10:32:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:07.296 10:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:07.296 10:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:07.296 10:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:07.296 10:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:07.296 10:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:07.296 10:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:07.296 10:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:07.296 10:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:07.296 10:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:07.296 10:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:07.296 10:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:07.296 10:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:07.296 10:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:07.296 10:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:07.296 10:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:07.296 10:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:07.296 10:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:07.296 10:32:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@211 -- # local strip_size 00:09:07.296 10:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:07.296 10:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:07.556 10:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:07.556 10:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:07.556 10:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:07.556 10:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:07.556 10:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:07.556 10:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63986 00:09:07.556 10:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:07.556 10:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63986' 00:09:07.556 Process raid pid: 63986 00:09:07.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.556 10:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63986 00:09:07.556 10:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63986 ']' 00:09:07.556 10:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.556 10:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:07.556 10:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:07.556 10:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:07.556 10:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.556 [2024-11-20 10:32:10.862295] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:09:07.556 [2024-11-20 10:32:10.862553] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.815 [2024-11-20 10:32:11.043630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.815 [2024-11-20 10:32:11.175215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.074 [2024-11-20 10:32:11.404929] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.074 [2024-11-20 10:32:11.405068] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.334 10:32:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.334 10:32:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:08.334 10:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:08.334 10:32:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.334 10:32:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.334 [2024-11-20 10:32:11.743923] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:08.334 [2024-11-20 10:32:11.744043] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:08.334 [2024-11-20 10:32:11.744080] 
bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:08.334 [2024-11-20 10:32:11.744109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:08.334 [2024-11-20 10:32:11.744132] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:08.334 [2024-11-20 10:32:11.744158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:08.334 10:32:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.334 10:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:08.334 10:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.334 10:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.334 10:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:08.334 10:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.334 10:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.334 10:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.334 10:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.334 10:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.334 10:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.335 10:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.335 10:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:09:08.335 10:32:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.335 10:32:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.335 10:32:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.335 10:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.335 "name": "Existed_Raid", 00:09:08.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.335 "strip_size_kb": 64, 00:09:08.335 "state": "configuring", 00:09:08.335 "raid_level": "raid0", 00:09:08.335 "superblock": false, 00:09:08.335 "num_base_bdevs": 3, 00:09:08.335 "num_base_bdevs_discovered": 0, 00:09:08.335 "num_base_bdevs_operational": 3, 00:09:08.335 "base_bdevs_list": [ 00:09:08.335 { 00:09:08.335 "name": "BaseBdev1", 00:09:08.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.335 "is_configured": false, 00:09:08.335 "data_offset": 0, 00:09:08.335 "data_size": 0 00:09:08.335 }, 00:09:08.335 { 00:09:08.335 "name": "BaseBdev2", 00:09:08.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.335 "is_configured": false, 00:09:08.335 "data_offset": 0, 00:09:08.335 "data_size": 0 00:09:08.335 }, 00:09:08.335 { 00:09:08.335 "name": "BaseBdev3", 00:09:08.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.335 "is_configured": false, 00:09:08.335 "data_offset": 0, 00:09:08.335 "data_size": 0 00:09:08.335 } 00:09:08.335 ] 00:09:08.335 }' 00:09:08.335 10:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.335 10:32:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.904 10:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:08.904 10:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.904 10:32:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.904 [2024-11-20 10:32:12.207099] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:08.904 [2024-11-20 10:32:12.207139] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:08.904 10:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.904 10:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:08.904 10:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.904 10:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.904 [2024-11-20 10:32:12.219084] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:08.904 [2024-11-20 10:32:12.219136] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:08.904 [2024-11-20 10:32:12.219147] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:08.904 [2024-11-20 10:32:12.219158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:08.904 [2024-11-20 10:32:12.219166] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:08.904 [2024-11-20 10:32:12.219176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:08.904 10:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.904 10:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:08.904 10:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:08.904 10:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.904 [2024-11-20 10:32:12.268325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:08.904 BaseBdev1 00:09:08.904 10:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.904 10:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:08.904 10:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:08.904 10:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:08.905 10:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:08.905 10:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:08.905 10:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:08.905 10:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:08.905 10:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.905 10:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.905 10:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.905 10:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:08.905 10:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.905 10:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.905 [ 00:09:08.905 { 00:09:08.905 "name": "BaseBdev1", 00:09:08.905 "aliases": [ 00:09:08.905 "cdd6973a-cdc5-4714-a732-d3f1c0866203" 00:09:08.905 ], 00:09:08.905 
"product_name": "Malloc disk", 00:09:08.905 "block_size": 512, 00:09:08.905 "num_blocks": 65536, 00:09:08.905 "uuid": "cdd6973a-cdc5-4714-a732-d3f1c0866203", 00:09:08.905 "assigned_rate_limits": { 00:09:08.905 "rw_ios_per_sec": 0, 00:09:08.905 "rw_mbytes_per_sec": 0, 00:09:08.905 "r_mbytes_per_sec": 0, 00:09:08.905 "w_mbytes_per_sec": 0 00:09:08.905 }, 00:09:08.905 "claimed": true, 00:09:08.905 "claim_type": "exclusive_write", 00:09:08.905 "zoned": false, 00:09:08.905 "supported_io_types": { 00:09:08.905 "read": true, 00:09:08.905 "write": true, 00:09:08.905 "unmap": true, 00:09:08.905 "flush": true, 00:09:08.905 "reset": true, 00:09:08.905 "nvme_admin": false, 00:09:08.905 "nvme_io": false, 00:09:08.905 "nvme_io_md": false, 00:09:08.905 "write_zeroes": true, 00:09:08.905 "zcopy": true, 00:09:08.905 "get_zone_info": false, 00:09:08.905 "zone_management": false, 00:09:08.905 "zone_append": false, 00:09:08.905 "compare": false, 00:09:08.905 "compare_and_write": false, 00:09:08.905 "abort": true, 00:09:08.905 "seek_hole": false, 00:09:08.905 "seek_data": false, 00:09:08.905 "copy": true, 00:09:08.905 "nvme_iov_md": false 00:09:08.905 }, 00:09:08.905 "memory_domains": [ 00:09:08.905 { 00:09:08.905 "dma_device_id": "system", 00:09:08.905 "dma_device_type": 1 00:09:08.905 }, 00:09:08.905 { 00:09:08.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.905 "dma_device_type": 2 00:09:08.905 } 00:09:08.905 ], 00:09:08.905 "driver_specific": {} 00:09:08.905 } 00:09:08.905 ] 00:09:08.905 10:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.905 10:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:08.905 10:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:08.905 10:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.905 10:32:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.905 10:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:08.905 10:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.905 10:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.905 10:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.905 10:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.905 10:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.905 10:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.905 10:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.905 10:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.905 10:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.905 10:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.905 10:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.905 10:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.905 "name": "Existed_Raid", 00:09:08.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.905 "strip_size_kb": 64, 00:09:08.905 "state": "configuring", 00:09:08.905 "raid_level": "raid0", 00:09:08.905 "superblock": false, 00:09:08.905 "num_base_bdevs": 3, 00:09:08.905 "num_base_bdevs_discovered": 1, 00:09:08.905 "num_base_bdevs_operational": 3, 00:09:08.905 "base_bdevs_list": [ 00:09:08.905 { 00:09:08.905 "name": "BaseBdev1", 
00:09:08.905 "uuid": "cdd6973a-cdc5-4714-a732-d3f1c0866203", 00:09:08.905 "is_configured": true, 00:09:08.905 "data_offset": 0, 00:09:08.905 "data_size": 65536 00:09:08.905 }, 00:09:08.905 { 00:09:08.905 "name": "BaseBdev2", 00:09:08.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.905 "is_configured": false, 00:09:08.905 "data_offset": 0, 00:09:08.905 "data_size": 0 00:09:08.905 }, 00:09:08.905 { 00:09:08.905 "name": "BaseBdev3", 00:09:08.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.905 "is_configured": false, 00:09:08.905 "data_offset": 0, 00:09:08.905 "data_size": 0 00:09:08.905 } 00:09:08.905 ] 00:09:08.905 }' 00:09:08.905 10:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.905 10:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.474 10:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:09.474 10:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.474 10:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.474 [2024-11-20 10:32:12.819481] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:09.474 [2024-11-20 10:32:12.819540] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:09.474 10:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.474 10:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:09.474 10:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.474 10:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.474 [2024-11-20 
10:32:12.831541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:09.474 [2024-11-20 10:32:12.833558] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:09.474 [2024-11-20 10:32:12.833666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:09.474 [2024-11-20 10:32:12.833685] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:09.474 [2024-11-20 10:32:12.833697] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:09.474 10:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.474 10:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:09.474 10:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:09.474 10:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:09.474 10:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.474 10:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.474 10:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.474 10:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.474 10:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.474 10:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.474 10:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.474 10:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:09.474 10:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.474 10:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.474 10:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.474 10:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.474 10:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.474 10:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.474 10:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.474 "name": "Existed_Raid", 00:09:09.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.474 "strip_size_kb": 64, 00:09:09.474 "state": "configuring", 00:09:09.474 "raid_level": "raid0", 00:09:09.474 "superblock": false, 00:09:09.474 "num_base_bdevs": 3, 00:09:09.474 "num_base_bdevs_discovered": 1, 00:09:09.474 "num_base_bdevs_operational": 3, 00:09:09.474 "base_bdevs_list": [ 00:09:09.474 { 00:09:09.474 "name": "BaseBdev1", 00:09:09.474 "uuid": "cdd6973a-cdc5-4714-a732-d3f1c0866203", 00:09:09.474 "is_configured": true, 00:09:09.474 "data_offset": 0, 00:09:09.474 "data_size": 65536 00:09:09.474 }, 00:09:09.474 { 00:09:09.474 "name": "BaseBdev2", 00:09:09.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.474 "is_configured": false, 00:09:09.474 "data_offset": 0, 00:09:09.474 "data_size": 0 00:09:09.474 }, 00:09:09.474 { 00:09:09.475 "name": "BaseBdev3", 00:09:09.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.475 "is_configured": false, 00:09:09.475 "data_offset": 0, 00:09:09.475 "data_size": 0 00:09:09.475 } 00:09:09.475 ] 00:09:09.475 }' 00:09:09.475 10:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:09.475 10:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.043 10:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:10.043 10:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.043 10:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.043 [2024-11-20 10:32:13.346585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:10.043 BaseBdev2 00:09:10.043 10:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.043 10:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:10.043 10:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:10.043 10:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:10.043 10:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:10.043 10:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:10.043 10:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:10.043 10:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:10.043 10:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.043 10:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.043 10:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.043 10:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:10.043 10:32:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.043 10:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.043 [ 00:09:10.043 { 00:09:10.043 "name": "BaseBdev2", 00:09:10.043 "aliases": [ 00:09:10.043 "3e195659-de37-4504-85d3-e8a69b8bba0e" 00:09:10.043 ], 00:09:10.043 "product_name": "Malloc disk", 00:09:10.043 "block_size": 512, 00:09:10.043 "num_blocks": 65536, 00:09:10.043 "uuid": "3e195659-de37-4504-85d3-e8a69b8bba0e", 00:09:10.043 "assigned_rate_limits": { 00:09:10.043 "rw_ios_per_sec": 0, 00:09:10.043 "rw_mbytes_per_sec": 0, 00:09:10.043 "r_mbytes_per_sec": 0, 00:09:10.043 "w_mbytes_per_sec": 0 00:09:10.043 }, 00:09:10.043 "claimed": true, 00:09:10.043 "claim_type": "exclusive_write", 00:09:10.043 "zoned": false, 00:09:10.043 "supported_io_types": { 00:09:10.043 "read": true, 00:09:10.043 "write": true, 00:09:10.043 "unmap": true, 00:09:10.043 "flush": true, 00:09:10.043 "reset": true, 00:09:10.043 "nvme_admin": false, 00:09:10.043 "nvme_io": false, 00:09:10.043 "nvme_io_md": false, 00:09:10.043 "write_zeroes": true, 00:09:10.043 "zcopy": true, 00:09:10.043 "get_zone_info": false, 00:09:10.043 "zone_management": false, 00:09:10.043 "zone_append": false, 00:09:10.043 "compare": false, 00:09:10.043 "compare_and_write": false, 00:09:10.043 "abort": true, 00:09:10.043 "seek_hole": false, 00:09:10.043 "seek_data": false, 00:09:10.043 "copy": true, 00:09:10.043 "nvme_iov_md": false 00:09:10.043 }, 00:09:10.043 "memory_domains": [ 00:09:10.043 { 00:09:10.043 "dma_device_id": "system", 00:09:10.043 "dma_device_type": 1 00:09:10.043 }, 00:09:10.043 { 00:09:10.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.043 "dma_device_type": 2 00:09:10.043 } 00:09:10.043 ], 00:09:10.043 "driver_specific": {} 00:09:10.043 } 00:09:10.043 ] 00:09:10.043 10:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.043 10:32:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:10.043 10:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:10.043 10:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:10.043 10:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:10.043 10:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.043 10:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.043 10:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:10.043 10:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.043 10:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.043 10:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.043 10:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.043 10:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.043 10:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.043 10:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.043 10:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.043 10:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.043 10:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.043 10:32:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.043 10:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.043 "name": "Existed_Raid", 00:09:10.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.043 "strip_size_kb": 64, 00:09:10.043 "state": "configuring", 00:09:10.043 "raid_level": "raid0", 00:09:10.043 "superblock": false, 00:09:10.043 "num_base_bdevs": 3, 00:09:10.043 "num_base_bdevs_discovered": 2, 00:09:10.043 "num_base_bdevs_operational": 3, 00:09:10.043 "base_bdevs_list": [ 00:09:10.043 { 00:09:10.043 "name": "BaseBdev1", 00:09:10.043 "uuid": "cdd6973a-cdc5-4714-a732-d3f1c0866203", 00:09:10.043 "is_configured": true, 00:09:10.043 "data_offset": 0, 00:09:10.043 "data_size": 65536 00:09:10.043 }, 00:09:10.043 { 00:09:10.043 "name": "BaseBdev2", 00:09:10.043 "uuid": "3e195659-de37-4504-85d3-e8a69b8bba0e", 00:09:10.043 "is_configured": true, 00:09:10.043 "data_offset": 0, 00:09:10.043 "data_size": 65536 00:09:10.043 }, 00:09:10.043 { 00:09:10.044 "name": "BaseBdev3", 00:09:10.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.044 "is_configured": false, 00:09:10.044 "data_offset": 0, 00:09:10.044 "data_size": 0 00:09:10.044 } 00:09:10.044 ] 00:09:10.044 }' 00:09:10.044 10:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.044 10:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.611 10:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:10.611 10:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.612 10:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.612 [2024-11-20 10:32:13.856695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:10.612 [2024-11-20 10:32:13.856748] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:10.612 [2024-11-20 10:32:13.856763] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:10.612 [2024-11-20 10:32:13.857056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:10.612 [2024-11-20 10:32:13.857253] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:10.612 [2024-11-20 10:32:13.857263] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:10.612 [2024-11-20 10:32:13.857578] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.612 BaseBdev3 00:09:10.612 10:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.612 10:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:10.612 10:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:10.612 10:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:10.612 10:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:10.612 10:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:10.612 10:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:10.612 10:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:10.612 10:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.612 10:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.612 10:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.612 
10:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:10.612 10:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.612 10:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.612 [ 00:09:10.612 { 00:09:10.612 "name": "BaseBdev3", 00:09:10.612 "aliases": [ 00:09:10.612 "e18a0131-595a-4897-9e24-87bb77e0d132" 00:09:10.612 ], 00:09:10.612 "product_name": "Malloc disk", 00:09:10.612 "block_size": 512, 00:09:10.612 "num_blocks": 65536, 00:09:10.612 "uuid": "e18a0131-595a-4897-9e24-87bb77e0d132", 00:09:10.612 "assigned_rate_limits": { 00:09:10.612 "rw_ios_per_sec": 0, 00:09:10.612 "rw_mbytes_per_sec": 0, 00:09:10.612 "r_mbytes_per_sec": 0, 00:09:10.612 "w_mbytes_per_sec": 0 00:09:10.612 }, 00:09:10.612 "claimed": true, 00:09:10.612 "claim_type": "exclusive_write", 00:09:10.612 "zoned": false, 00:09:10.612 "supported_io_types": { 00:09:10.612 "read": true, 00:09:10.612 "write": true, 00:09:10.612 "unmap": true, 00:09:10.612 "flush": true, 00:09:10.612 "reset": true, 00:09:10.612 "nvme_admin": false, 00:09:10.612 "nvme_io": false, 00:09:10.612 "nvme_io_md": false, 00:09:10.612 "write_zeroes": true, 00:09:10.612 "zcopy": true, 00:09:10.612 "get_zone_info": false, 00:09:10.612 "zone_management": false, 00:09:10.612 "zone_append": false, 00:09:10.612 "compare": false, 00:09:10.612 "compare_and_write": false, 00:09:10.612 "abort": true, 00:09:10.612 "seek_hole": false, 00:09:10.612 "seek_data": false, 00:09:10.612 "copy": true, 00:09:10.612 "nvme_iov_md": false 00:09:10.612 }, 00:09:10.612 "memory_domains": [ 00:09:10.612 { 00:09:10.612 "dma_device_id": "system", 00:09:10.612 "dma_device_type": 1 00:09:10.612 }, 00:09:10.612 { 00:09:10.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.612 "dma_device_type": 2 00:09:10.612 } 00:09:10.612 ], 00:09:10.612 "driver_specific": {} 00:09:10.612 } 00:09:10.612 ] 
00:09:10.612 10:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.612 10:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:10.612 10:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:10.612 10:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:10.612 10:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:10.612 10:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.612 10:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:10.612 10:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:10.612 10:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.612 10:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.612 10:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.612 10:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.612 10:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.612 10:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.612 10:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.612 10:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.612 10:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.612 10:32:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:10.612 10:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.612 10:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.612 "name": "Existed_Raid", 00:09:10.612 "uuid": "ab481db7-85ba-488a-a9b8-7dba3b958689", 00:09:10.612 "strip_size_kb": 64, 00:09:10.612 "state": "online", 00:09:10.612 "raid_level": "raid0", 00:09:10.612 "superblock": false, 00:09:10.612 "num_base_bdevs": 3, 00:09:10.612 "num_base_bdevs_discovered": 3, 00:09:10.612 "num_base_bdevs_operational": 3, 00:09:10.612 "base_bdevs_list": [ 00:09:10.612 { 00:09:10.612 "name": "BaseBdev1", 00:09:10.612 "uuid": "cdd6973a-cdc5-4714-a732-d3f1c0866203", 00:09:10.612 "is_configured": true, 00:09:10.612 "data_offset": 0, 00:09:10.612 "data_size": 65536 00:09:10.612 }, 00:09:10.612 { 00:09:10.612 "name": "BaseBdev2", 00:09:10.612 "uuid": "3e195659-de37-4504-85d3-e8a69b8bba0e", 00:09:10.612 "is_configured": true, 00:09:10.612 "data_offset": 0, 00:09:10.612 "data_size": 65536 00:09:10.612 }, 00:09:10.612 { 00:09:10.612 "name": "BaseBdev3", 00:09:10.612 "uuid": "e18a0131-595a-4897-9e24-87bb77e0d132", 00:09:10.612 "is_configured": true, 00:09:10.612 "data_offset": 0, 00:09:10.612 "data_size": 65536 00:09:10.612 } 00:09:10.612 ] 00:09:10.612 }' 00:09:10.612 10:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.612 10:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.873 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:10.873 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:10.873 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:10.873 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:09:10.873 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:10.873 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:10.873 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:10.873 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:10.873 10:32:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.873 10:32:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.873 [2024-11-20 10:32:14.348289] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:11.132 10:32:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.132 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:11.132 "name": "Existed_Raid", 00:09:11.132 "aliases": [ 00:09:11.132 "ab481db7-85ba-488a-a9b8-7dba3b958689" 00:09:11.132 ], 00:09:11.132 "product_name": "Raid Volume", 00:09:11.132 "block_size": 512, 00:09:11.132 "num_blocks": 196608, 00:09:11.132 "uuid": "ab481db7-85ba-488a-a9b8-7dba3b958689", 00:09:11.132 "assigned_rate_limits": { 00:09:11.132 "rw_ios_per_sec": 0, 00:09:11.132 "rw_mbytes_per_sec": 0, 00:09:11.132 "r_mbytes_per_sec": 0, 00:09:11.132 "w_mbytes_per_sec": 0 00:09:11.132 }, 00:09:11.132 "claimed": false, 00:09:11.132 "zoned": false, 00:09:11.132 "supported_io_types": { 00:09:11.132 "read": true, 00:09:11.132 "write": true, 00:09:11.132 "unmap": true, 00:09:11.132 "flush": true, 00:09:11.132 "reset": true, 00:09:11.132 "nvme_admin": false, 00:09:11.132 "nvme_io": false, 00:09:11.132 "nvme_io_md": false, 00:09:11.132 "write_zeroes": true, 00:09:11.132 "zcopy": false, 00:09:11.132 "get_zone_info": false, 00:09:11.132 "zone_management": false, 00:09:11.132 
"zone_append": false, 00:09:11.132 "compare": false, 00:09:11.132 "compare_and_write": false, 00:09:11.132 "abort": false, 00:09:11.132 "seek_hole": false, 00:09:11.132 "seek_data": false, 00:09:11.132 "copy": false, 00:09:11.132 "nvme_iov_md": false 00:09:11.132 }, 00:09:11.132 "memory_domains": [ 00:09:11.132 { 00:09:11.132 "dma_device_id": "system", 00:09:11.132 "dma_device_type": 1 00:09:11.132 }, 00:09:11.132 { 00:09:11.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.132 "dma_device_type": 2 00:09:11.132 }, 00:09:11.132 { 00:09:11.132 "dma_device_id": "system", 00:09:11.132 "dma_device_type": 1 00:09:11.132 }, 00:09:11.132 { 00:09:11.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.132 "dma_device_type": 2 00:09:11.132 }, 00:09:11.132 { 00:09:11.132 "dma_device_id": "system", 00:09:11.132 "dma_device_type": 1 00:09:11.132 }, 00:09:11.132 { 00:09:11.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.132 "dma_device_type": 2 00:09:11.132 } 00:09:11.132 ], 00:09:11.132 "driver_specific": { 00:09:11.132 "raid": { 00:09:11.132 "uuid": "ab481db7-85ba-488a-a9b8-7dba3b958689", 00:09:11.132 "strip_size_kb": 64, 00:09:11.132 "state": "online", 00:09:11.132 "raid_level": "raid0", 00:09:11.132 "superblock": false, 00:09:11.132 "num_base_bdevs": 3, 00:09:11.132 "num_base_bdevs_discovered": 3, 00:09:11.132 "num_base_bdevs_operational": 3, 00:09:11.132 "base_bdevs_list": [ 00:09:11.132 { 00:09:11.132 "name": "BaseBdev1", 00:09:11.132 "uuid": "cdd6973a-cdc5-4714-a732-d3f1c0866203", 00:09:11.132 "is_configured": true, 00:09:11.132 "data_offset": 0, 00:09:11.132 "data_size": 65536 00:09:11.132 }, 00:09:11.132 { 00:09:11.132 "name": "BaseBdev2", 00:09:11.132 "uuid": "3e195659-de37-4504-85d3-e8a69b8bba0e", 00:09:11.132 "is_configured": true, 00:09:11.132 "data_offset": 0, 00:09:11.132 "data_size": 65536 00:09:11.132 }, 00:09:11.132 { 00:09:11.132 "name": "BaseBdev3", 00:09:11.132 "uuid": "e18a0131-595a-4897-9e24-87bb77e0d132", 00:09:11.132 "is_configured": true, 
00:09:11.132 "data_offset": 0, 00:09:11.132 "data_size": 65536 00:09:11.132 } 00:09:11.132 ] 00:09:11.132 } 00:09:11.132 } 00:09:11.132 }' 00:09:11.132 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:11.132 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:11.132 BaseBdev2 00:09:11.132 BaseBdev3' 00:09:11.132 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.132 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:11.132 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.132 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:11.132 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.132 10:32:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.132 10:32:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.132 10:32:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.132 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.132 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.132 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.132 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:11.132 10:32:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.132 10:32:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.133 10:32:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.133 10:32:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.133 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.133 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.133 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.133 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:11.133 10:32:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.133 10:32:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.133 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.133 10:32:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.392 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.392 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.392 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:11.392 10:32:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.392 10:32:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.392 [2024-11-20 10:32:14.627574] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:11.392 [2024-11-20 10:32:14.627604] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:11.392 [2024-11-20 10:32:14.627661] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:11.392 10:32:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.392 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:11.392 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:11.392 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:11.392 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:11.392 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:11.392 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:11.392 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.392 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:11.392 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.392 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.392 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:11.392 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.393 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.393 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:11.393 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.393 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.393 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.393 10:32:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.393 10:32:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.393 10:32:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.393 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.393 "name": "Existed_Raid", 00:09:11.393 "uuid": "ab481db7-85ba-488a-a9b8-7dba3b958689", 00:09:11.393 "strip_size_kb": 64, 00:09:11.393 "state": "offline", 00:09:11.393 "raid_level": "raid0", 00:09:11.393 "superblock": false, 00:09:11.393 "num_base_bdevs": 3, 00:09:11.393 "num_base_bdevs_discovered": 2, 00:09:11.393 "num_base_bdevs_operational": 2, 00:09:11.393 "base_bdevs_list": [ 00:09:11.393 { 00:09:11.393 "name": null, 00:09:11.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.393 "is_configured": false, 00:09:11.393 "data_offset": 0, 00:09:11.393 "data_size": 65536 00:09:11.393 }, 00:09:11.393 { 00:09:11.393 "name": "BaseBdev2", 00:09:11.393 "uuid": "3e195659-de37-4504-85d3-e8a69b8bba0e", 00:09:11.393 "is_configured": true, 00:09:11.393 "data_offset": 0, 00:09:11.393 "data_size": 65536 00:09:11.393 }, 00:09:11.393 { 00:09:11.393 "name": "BaseBdev3", 00:09:11.393 "uuid": "e18a0131-595a-4897-9e24-87bb77e0d132", 00:09:11.393 "is_configured": true, 00:09:11.393 "data_offset": 0, 00:09:11.393 "data_size": 65536 00:09:11.393 } 00:09:11.393 ] 00:09:11.393 }' 00:09:11.393 10:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.393 10:32:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.959 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:11.959 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:11.959 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.959 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:11.959 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.959 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.959 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.959 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:11.959 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:11.959 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:11.959 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.959 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.959 [2024-11-20 10:32:15.246146] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:11.959 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.959 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:11.959 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:11.959 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:11.959 10:32:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.959 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.959 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.959 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.959 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:11.959 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:11.959 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:11.959 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.959 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.959 [2024-11-20 10:32:15.409628] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:11.959 [2024-11-20 10:32:15.409689] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:12.217 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.217 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:12.217 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:12.217 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.217 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.217 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.217 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:12.217 10:32:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.217 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:12.217 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:12.217 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:12.217 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:12.217 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:12.217 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:12.217 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.217 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.217 BaseBdev2 00:09:12.217 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.217 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:12.217 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:12.217 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:12.217 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:12.217 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:12.217 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:12.217 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:12.217 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.217 10:32:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.217 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.217 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:12.217 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.217 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.217 [ 00:09:12.217 { 00:09:12.217 "name": "BaseBdev2", 00:09:12.217 "aliases": [ 00:09:12.217 "9f3f4d50-19e5-4189-957d-ebb256e2d745" 00:09:12.217 ], 00:09:12.217 "product_name": "Malloc disk", 00:09:12.217 "block_size": 512, 00:09:12.217 "num_blocks": 65536, 00:09:12.217 "uuid": "9f3f4d50-19e5-4189-957d-ebb256e2d745", 00:09:12.217 "assigned_rate_limits": { 00:09:12.217 "rw_ios_per_sec": 0, 00:09:12.217 "rw_mbytes_per_sec": 0, 00:09:12.217 "r_mbytes_per_sec": 0, 00:09:12.217 "w_mbytes_per_sec": 0 00:09:12.217 }, 00:09:12.217 "claimed": false, 00:09:12.217 "zoned": false, 00:09:12.217 "supported_io_types": { 00:09:12.217 "read": true, 00:09:12.217 "write": true, 00:09:12.217 "unmap": true, 00:09:12.217 "flush": true, 00:09:12.217 "reset": true, 00:09:12.217 "nvme_admin": false, 00:09:12.217 "nvme_io": false, 00:09:12.217 "nvme_io_md": false, 00:09:12.217 "write_zeroes": true, 00:09:12.217 "zcopy": true, 00:09:12.217 "get_zone_info": false, 00:09:12.217 "zone_management": false, 00:09:12.217 "zone_append": false, 00:09:12.217 "compare": false, 00:09:12.217 "compare_and_write": false, 00:09:12.217 "abort": true, 00:09:12.217 "seek_hole": false, 00:09:12.217 "seek_data": false, 00:09:12.217 "copy": true, 00:09:12.217 "nvme_iov_md": false 00:09:12.217 }, 00:09:12.217 "memory_domains": [ 00:09:12.217 { 00:09:12.217 "dma_device_id": "system", 00:09:12.217 "dma_device_type": 1 00:09:12.217 }, 00:09:12.217 { 00:09:12.217 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:12.217 "dma_device_type": 2 00:09:12.217 } 00:09:12.217 ], 00:09:12.217 "driver_specific": {} 00:09:12.217 } 00:09:12.217 ] 00:09:12.217 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.217 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:12.217 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:12.217 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:12.217 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:12.217 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.217 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.476 BaseBdev3 00:09:12.476 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.476 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:12.476 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:12.476 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:12.476 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:12.476 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:12.476 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:12.476 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:12.476 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.476 10:32:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.476 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.476 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:12.476 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.476 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.476 [ 00:09:12.476 { 00:09:12.476 "name": "BaseBdev3", 00:09:12.476 "aliases": [ 00:09:12.476 "96b0f4f8-15cc-45e2-9595-45f3b2ce04cc" 00:09:12.476 ], 00:09:12.476 "product_name": "Malloc disk", 00:09:12.476 "block_size": 512, 00:09:12.476 "num_blocks": 65536, 00:09:12.476 "uuid": "96b0f4f8-15cc-45e2-9595-45f3b2ce04cc", 00:09:12.476 "assigned_rate_limits": { 00:09:12.476 "rw_ios_per_sec": 0, 00:09:12.476 "rw_mbytes_per_sec": 0, 00:09:12.476 "r_mbytes_per_sec": 0, 00:09:12.476 "w_mbytes_per_sec": 0 00:09:12.476 }, 00:09:12.476 "claimed": false, 00:09:12.476 "zoned": false, 00:09:12.476 "supported_io_types": { 00:09:12.476 "read": true, 00:09:12.476 "write": true, 00:09:12.476 "unmap": true, 00:09:12.476 "flush": true, 00:09:12.476 "reset": true, 00:09:12.476 "nvme_admin": false, 00:09:12.476 "nvme_io": false, 00:09:12.476 "nvme_io_md": false, 00:09:12.476 "write_zeroes": true, 00:09:12.476 "zcopy": true, 00:09:12.476 "get_zone_info": false, 00:09:12.476 "zone_management": false, 00:09:12.476 "zone_append": false, 00:09:12.476 "compare": false, 00:09:12.476 "compare_and_write": false, 00:09:12.476 "abort": true, 00:09:12.476 "seek_hole": false, 00:09:12.476 "seek_data": false, 00:09:12.476 "copy": true, 00:09:12.476 "nvme_iov_md": false 00:09:12.476 }, 00:09:12.476 "memory_domains": [ 00:09:12.476 { 00:09:12.476 "dma_device_id": "system", 00:09:12.476 "dma_device_type": 1 00:09:12.476 }, 00:09:12.476 { 00:09:12.476 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:12.476 "dma_device_type": 2 00:09:12.476 } 00:09:12.476 ], 00:09:12.476 "driver_specific": {} 00:09:12.476 } 00:09:12.476 ] 00:09:12.476 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.476 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:12.476 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:12.476 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:12.476 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:12.476 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.476 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.476 [2024-11-20 10:32:15.751522] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:12.476 [2024-11-20 10:32:15.751636] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:12.476 [2024-11-20 10:32:15.751707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:12.476 [2024-11-20 10:32:15.753960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:12.476 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.476 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:12.476 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.476 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.476 
10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:12.476 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.476 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.476 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.476 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.476 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.476 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.476 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.476 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.476 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.476 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.476 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.476 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.476 "name": "Existed_Raid", 00:09:12.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.476 "strip_size_kb": 64, 00:09:12.476 "state": "configuring", 00:09:12.476 "raid_level": "raid0", 00:09:12.476 "superblock": false, 00:09:12.476 "num_base_bdevs": 3, 00:09:12.476 "num_base_bdevs_discovered": 2, 00:09:12.476 "num_base_bdevs_operational": 3, 00:09:12.476 "base_bdevs_list": [ 00:09:12.476 { 00:09:12.476 "name": "BaseBdev1", 00:09:12.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.476 "is_configured": false, 00:09:12.476 
"data_offset": 0, 00:09:12.476 "data_size": 0 00:09:12.476 }, 00:09:12.476 { 00:09:12.476 "name": "BaseBdev2", 00:09:12.476 "uuid": "9f3f4d50-19e5-4189-957d-ebb256e2d745", 00:09:12.476 "is_configured": true, 00:09:12.476 "data_offset": 0, 00:09:12.476 "data_size": 65536 00:09:12.476 }, 00:09:12.476 { 00:09:12.476 "name": "BaseBdev3", 00:09:12.477 "uuid": "96b0f4f8-15cc-45e2-9595-45f3b2ce04cc", 00:09:12.477 "is_configured": true, 00:09:12.477 "data_offset": 0, 00:09:12.477 "data_size": 65536 00:09:12.477 } 00:09:12.477 ] 00:09:12.477 }' 00:09:12.477 10:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.477 10:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.735 10:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:12.735 10:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.735 10:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.735 [2024-11-20 10:32:16.190741] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:12.735 10:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.735 10:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:12.735 10:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.735 10:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.735 10:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:12.735 10:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.735 10:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:09:12.735 10:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.735 10:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.735 10:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.735 10:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.735 10:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.735 10:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.735 10:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.735 10:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.992 10:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.992 10:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.992 "name": "Existed_Raid", 00:09:12.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.992 "strip_size_kb": 64, 00:09:12.992 "state": "configuring", 00:09:12.992 "raid_level": "raid0", 00:09:12.992 "superblock": false, 00:09:12.992 "num_base_bdevs": 3, 00:09:12.992 "num_base_bdevs_discovered": 1, 00:09:12.992 "num_base_bdevs_operational": 3, 00:09:12.992 "base_bdevs_list": [ 00:09:12.992 { 00:09:12.992 "name": "BaseBdev1", 00:09:12.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.992 "is_configured": false, 00:09:12.992 "data_offset": 0, 00:09:12.992 "data_size": 0 00:09:12.992 }, 00:09:12.992 { 00:09:12.992 "name": null, 00:09:12.992 "uuid": "9f3f4d50-19e5-4189-957d-ebb256e2d745", 00:09:12.992 "is_configured": false, 00:09:12.992 "data_offset": 0, 00:09:12.992 "data_size": 65536 00:09:12.992 }, 00:09:12.992 { 
00:09:12.992 "name": "BaseBdev3", 00:09:12.992 "uuid": "96b0f4f8-15cc-45e2-9595-45f3b2ce04cc", 00:09:12.992 "is_configured": true, 00:09:12.992 "data_offset": 0, 00:09:12.992 "data_size": 65536 00:09:12.992 } 00:09:12.992 ] 00:09:12.992 }' 00:09:12.992 10:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.992 10:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.249 10:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:13.249 10:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.249 10:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.249 10:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.249 10:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.249 10:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:13.249 10:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:13.249 10:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.249 10:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.249 [2024-11-20 10:32:16.715102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:13.249 BaseBdev1 00:09:13.249 10:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.249 10:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:13.249 10:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:13.249 10:32:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:13.249 10:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:13.249 10:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:13.249 10:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:13.249 10:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:13.249 10:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.249 10:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.508 10:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.508 10:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:13.508 10:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.508 10:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.508 [ 00:09:13.508 { 00:09:13.508 "name": "BaseBdev1", 00:09:13.508 "aliases": [ 00:09:13.508 "44a68d64-01cd-40d1-8273-92aa7fe96823" 00:09:13.508 ], 00:09:13.508 "product_name": "Malloc disk", 00:09:13.508 "block_size": 512, 00:09:13.508 "num_blocks": 65536, 00:09:13.508 "uuid": "44a68d64-01cd-40d1-8273-92aa7fe96823", 00:09:13.508 "assigned_rate_limits": { 00:09:13.508 "rw_ios_per_sec": 0, 00:09:13.508 "rw_mbytes_per_sec": 0, 00:09:13.508 "r_mbytes_per_sec": 0, 00:09:13.508 "w_mbytes_per_sec": 0 00:09:13.508 }, 00:09:13.508 "claimed": true, 00:09:13.508 "claim_type": "exclusive_write", 00:09:13.508 "zoned": false, 00:09:13.508 "supported_io_types": { 00:09:13.508 "read": true, 00:09:13.508 "write": true, 00:09:13.508 "unmap": true, 00:09:13.508 "flush": true, 
00:09:13.508 "reset": true, 00:09:13.508 "nvme_admin": false, 00:09:13.508 "nvme_io": false, 00:09:13.508 "nvme_io_md": false, 00:09:13.508 "write_zeroes": true, 00:09:13.508 "zcopy": true, 00:09:13.508 "get_zone_info": false, 00:09:13.508 "zone_management": false, 00:09:13.508 "zone_append": false, 00:09:13.508 "compare": false, 00:09:13.508 "compare_and_write": false, 00:09:13.508 "abort": true, 00:09:13.508 "seek_hole": false, 00:09:13.508 "seek_data": false, 00:09:13.508 "copy": true, 00:09:13.508 "nvme_iov_md": false 00:09:13.508 }, 00:09:13.508 "memory_domains": [ 00:09:13.508 { 00:09:13.508 "dma_device_id": "system", 00:09:13.508 "dma_device_type": 1 00:09:13.508 }, 00:09:13.508 { 00:09:13.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.508 "dma_device_type": 2 00:09:13.508 } 00:09:13.508 ], 00:09:13.508 "driver_specific": {} 00:09:13.508 } 00:09:13.508 ] 00:09:13.508 10:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.508 10:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:13.508 10:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:13.508 10:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.508 10:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.508 10:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:13.508 10:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.508 10:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.508 10:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.508 10:32:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.508 10:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.508 10:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.508 10:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.508 10:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.508 10:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.508 10:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.508 10:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.508 10:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.508 "name": "Existed_Raid", 00:09:13.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.508 "strip_size_kb": 64, 00:09:13.508 "state": "configuring", 00:09:13.508 "raid_level": "raid0", 00:09:13.509 "superblock": false, 00:09:13.509 "num_base_bdevs": 3, 00:09:13.509 "num_base_bdevs_discovered": 2, 00:09:13.509 "num_base_bdevs_operational": 3, 00:09:13.509 "base_bdevs_list": [ 00:09:13.509 { 00:09:13.509 "name": "BaseBdev1", 00:09:13.509 "uuid": "44a68d64-01cd-40d1-8273-92aa7fe96823", 00:09:13.509 "is_configured": true, 00:09:13.509 "data_offset": 0, 00:09:13.509 "data_size": 65536 00:09:13.509 }, 00:09:13.509 { 00:09:13.509 "name": null, 00:09:13.509 "uuid": "9f3f4d50-19e5-4189-957d-ebb256e2d745", 00:09:13.509 "is_configured": false, 00:09:13.509 "data_offset": 0, 00:09:13.509 "data_size": 65536 00:09:13.509 }, 00:09:13.509 { 00:09:13.509 "name": "BaseBdev3", 00:09:13.509 "uuid": "96b0f4f8-15cc-45e2-9595-45f3b2ce04cc", 00:09:13.509 "is_configured": true, 00:09:13.509 "data_offset": 0, 00:09:13.509 "data_size": 65536 
00:09:13.509 } 00:09:13.509 ] 00:09:13.509 }' 00:09:13.509 10:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.509 10:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.768 10:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:13.768 10:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.768 10:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.768 10:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.027 10:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.027 10:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:14.027 10:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:14.027 10:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.027 10:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.027 [2024-11-20 10:32:17.286222] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:14.027 10:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.027 10:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:14.027 10:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.027 10:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.027 10:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:14.027 
10:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.027 10:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.027 10:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.027 10:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.027 10:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.027 10:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.027 10:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.027 10:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.027 10:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.027 10:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.027 10:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.027 10:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.027 "name": "Existed_Raid", 00:09:14.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.027 "strip_size_kb": 64, 00:09:14.027 "state": "configuring", 00:09:14.027 "raid_level": "raid0", 00:09:14.027 "superblock": false, 00:09:14.027 "num_base_bdevs": 3, 00:09:14.027 "num_base_bdevs_discovered": 1, 00:09:14.027 "num_base_bdevs_operational": 3, 00:09:14.027 "base_bdevs_list": [ 00:09:14.027 { 00:09:14.027 "name": "BaseBdev1", 00:09:14.027 "uuid": "44a68d64-01cd-40d1-8273-92aa7fe96823", 00:09:14.027 "is_configured": true, 00:09:14.027 "data_offset": 0, 00:09:14.027 "data_size": 65536 00:09:14.027 }, 00:09:14.027 { 00:09:14.027 "name": null, 
00:09:14.027 "uuid": "9f3f4d50-19e5-4189-957d-ebb256e2d745", 00:09:14.027 "is_configured": false, 00:09:14.027 "data_offset": 0, 00:09:14.027 "data_size": 65536 00:09:14.027 }, 00:09:14.027 { 00:09:14.027 "name": null, 00:09:14.027 "uuid": "96b0f4f8-15cc-45e2-9595-45f3b2ce04cc", 00:09:14.027 "is_configured": false, 00:09:14.027 "data_offset": 0, 00:09:14.027 "data_size": 65536 00:09:14.027 } 00:09:14.027 ] 00:09:14.027 }' 00:09:14.027 10:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.027 10:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.286 10:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.286 10:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.286 10:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.286 10:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:14.286 10:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.544 10:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:14.544 10:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:14.544 10:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.544 10:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.544 [2024-11-20 10:32:17.789455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:14.544 10:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.544 10:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:14.544 10:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.544 10:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.544 10:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:14.544 10:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.544 10:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.544 10:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.544 10:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.544 10:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.544 10:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.544 10:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.544 10:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.544 10:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.544 10:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.544 10:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.545 10:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.545 "name": "Existed_Raid", 00:09:14.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.545 "strip_size_kb": 64, 00:09:14.545 "state": "configuring", 00:09:14.545 "raid_level": "raid0", 00:09:14.545 "superblock": false, 00:09:14.545 
"num_base_bdevs": 3, 00:09:14.545 "num_base_bdevs_discovered": 2, 00:09:14.545 "num_base_bdevs_operational": 3, 00:09:14.545 "base_bdevs_list": [ 00:09:14.545 { 00:09:14.545 "name": "BaseBdev1", 00:09:14.545 "uuid": "44a68d64-01cd-40d1-8273-92aa7fe96823", 00:09:14.545 "is_configured": true, 00:09:14.545 "data_offset": 0, 00:09:14.545 "data_size": 65536 00:09:14.545 }, 00:09:14.545 { 00:09:14.545 "name": null, 00:09:14.545 "uuid": "9f3f4d50-19e5-4189-957d-ebb256e2d745", 00:09:14.545 "is_configured": false, 00:09:14.545 "data_offset": 0, 00:09:14.545 "data_size": 65536 00:09:14.545 }, 00:09:14.545 { 00:09:14.545 "name": "BaseBdev3", 00:09:14.545 "uuid": "96b0f4f8-15cc-45e2-9595-45f3b2ce04cc", 00:09:14.545 "is_configured": true, 00:09:14.545 "data_offset": 0, 00:09:14.545 "data_size": 65536 00:09:14.545 } 00:09:14.545 ] 00:09:14.545 }' 00:09:14.545 10:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.545 10:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.803 10:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.803 10:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.803 10:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.803 10:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:14.803 10:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.061 10:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:15.061 10:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:15.061 10:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.061 10:32:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.061 [2024-11-20 10:32:18.300584] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:15.061 10:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.061 10:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:15.061 10:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.061 10:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.061 10:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.061 10:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.061 10:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.061 10:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.061 10:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.061 10:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.061 10:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.061 10:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.061 10:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.062 10:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.062 10:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.062 10:32:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.062 10:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.062 "name": "Existed_Raid", 00:09:15.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.062 "strip_size_kb": 64, 00:09:15.062 "state": "configuring", 00:09:15.062 "raid_level": "raid0", 00:09:15.062 "superblock": false, 00:09:15.062 "num_base_bdevs": 3, 00:09:15.062 "num_base_bdevs_discovered": 1, 00:09:15.062 "num_base_bdevs_operational": 3, 00:09:15.062 "base_bdevs_list": [ 00:09:15.062 { 00:09:15.062 "name": null, 00:09:15.062 "uuid": "44a68d64-01cd-40d1-8273-92aa7fe96823", 00:09:15.062 "is_configured": false, 00:09:15.062 "data_offset": 0, 00:09:15.062 "data_size": 65536 00:09:15.062 }, 00:09:15.062 { 00:09:15.062 "name": null, 00:09:15.062 "uuid": "9f3f4d50-19e5-4189-957d-ebb256e2d745", 00:09:15.062 "is_configured": false, 00:09:15.062 "data_offset": 0, 00:09:15.062 "data_size": 65536 00:09:15.062 }, 00:09:15.062 { 00:09:15.062 "name": "BaseBdev3", 00:09:15.062 "uuid": "96b0f4f8-15cc-45e2-9595-45f3b2ce04cc", 00:09:15.062 "is_configured": true, 00:09:15.062 "data_offset": 0, 00:09:15.062 "data_size": 65536 00:09:15.062 } 00:09:15.062 ] 00:09:15.062 }' 00:09:15.062 10:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.062 10:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.629 10:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:15.629 10:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.629 10:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.629 10:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.629 10:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:09:15.629 10:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:15.629 10:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:15.629 10:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.629 10:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.629 [2024-11-20 10:32:18.943344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:15.629 10:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.629 10:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:15.629 10:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.629 10:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.629 10:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.629 10:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.629 10:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.629 10:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.629 10:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.629 10:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.629 10:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.629 10:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:15.629 10:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.629 10:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.629 10:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.629 10:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.629 10:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.629 "name": "Existed_Raid", 00:09:15.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.629 "strip_size_kb": 64, 00:09:15.629 "state": "configuring", 00:09:15.629 "raid_level": "raid0", 00:09:15.629 "superblock": false, 00:09:15.629 "num_base_bdevs": 3, 00:09:15.629 "num_base_bdevs_discovered": 2, 00:09:15.629 "num_base_bdevs_operational": 3, 00:09:15.629 "base_bdevs_list": [ 00:09:15.629 { 00:09:15.629 "name": null, 00:09:15.629 "uuid": "44a68d64-01cd-40d1-8273-92aa7fe96823", 00:09:15.629 "is_configured": false, 00:09:15.629 "data_offset": 0, 00:09:15.629 "data_size": 65536 00:09:15.629 }, 00:09:15.629 { 00:09:15.629 "name": "BaseBdev2", 00:09:15.629 "uuid": "9f3f4d50-19e5-4189-957d-ebb256e2d745", 00:09:15.629 "is_configured": true, 00:09:15.629 "data_offset": 0, 00:09:15.629 "data_size": 65536 00:09:15.629 }, 00:09:15.629 { 00:09:15.629 "name": "BaseBdev3", 00:09:15.629 "uuid": "96b0f4f8-15cc-45e2-9595-45f3b2ce04cc", 00:09:15.629 "is_configured": true, 00:09:15.629 "data_offset": 0, 00:09:15.629 "data_size": 65536 00:09:15.629 } 00:09:15.629 ] 00:09:15.629 }' 00:09:15.629 10:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.629 10:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.197 10:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.197 
10:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 44a68d64-01cd-40d1-8273-92aa7fe96823 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.198 [2024-11-20 10:32:19.518789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:16.198 [2024-11-20 10:32:19.518840] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:16.198 [2024-11-20 10:32:19.518850] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:16.198 [2024-11-20 10:32:19.519095] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:09:16.198 [2024-11-20 10:32:19.519249] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:16.198 [2024-11-20 10:32:19.519258] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:16.198 [2024-11-20 10:32:19.519541] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:16.198 NewBaseBdev 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:16.198 [ 00:09:16.198 { 00:09:16.198 "name": "NewBaseBdev", 00:09:16.198 "aliases": [ 00:09:16.198 "44a68d64-01cd-40d1-8273-92aa7fe96823" 00:09:16.198 ], 00:09:16.198 "product_name": "Malloc disk", 00:09:16.198 "block_size": 512, 00:09:16.198 "num_blocks": 65536, 00:09:16.198 "uuid": "44a68d64-01cd-40d1-8273-92aa7fe96823", 00:09:16.198 "assigned_rate_limits": { 00:09:16.198 "rw_ios_per_sec": 0, 00:09:16.198 "rw_mbytes_per_sec": 0, 00:09:16.198 "r_mbytes_per_sec": 0, 00:09:16.198 "w_mbytes_per_sec": 0 00:09:16.198 }, 00:09:16.198 "claimed": true, 00:09:16.198 "claim_type": "exclusive_write", 00:09:16.198 "zoned": false, 00:09:16.198 "supported_io_types": { 00:09:16.198 "read": true, 00:09:16.198 "write": true, 00:09:16.198 "unmap": true, 00:09:16.198 "flush": true, 00:09:16.198 "reset": true, 00:09:16.198 "nvme_admin": false, 00:09:16.198 "nvme_io": false, 00:09:16.198 "nvme_io_md": false, 00:09:16.198 "write_zeroes": true, 00:09:16.198 "zcopy": true, 00:09:16.198 "get_zone_info": false, 00:09:16.198 "zone_management": false, 00:09:16.198 "zone_append": false, 00:09:16.198 "compare": false, 00:09:16.198 "compare_and_write": false, 00:09:16.198 "abort": true, 00:09:16.198 "seek_hole": false, 00:09:16.198 "seek_data": false, 00:09:16.198 "copy": true, 00:09:16.198 "nvme_iov_md": false 00:09:16.198 }, 00:09:16.198 "memory_domains": [ 00:09:16.198 { 00:09:16.198 "dma_device_id": "system", 00:09:16.198 "dma_device_type": 1 00:09:16.198 }, 00:09:16.198 { 00:09:16.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.198 "dma_device_type": 2 00:09:16.198 } 00:09:16.198 ], 00:09:16.198 "driver_specific": {} 00:09:16.198 } 00:09:16.198 ] 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.198 "name": "Existed_Raid", 00:09:16.198 "uuid": "1df8aa81-1726-4a6d-a2c9-d8f0af0ce905", 00:09:16.198 "strip_size_kb": 64, 00:09:16.198 "state": "online", 00:09:16.198 "raid_level": "raid0", 00:09:16.198 "superblock": false, 00:09:16.198 "num_base_bdevs": 3, 00:09:16.198 
"num_base_bdevs_discovered": 3, 00:09:16.198 "num_base_bdevs_operational": 3, 00:09:16.198 "base_bdevs_list": [ 00:09:16.198 { 00:09:16.198 "name": "NewBaseBdev", 00:09:16.198 "uuid": "44a68d64-01cd-40d1-8273-92aa7fe96823", 00:09:16.198 "is_configured": true, 00:09:16.198 "data_offset": 0, 00:09:16.198 "data_size": 65536 00:09:16.198 }, 00:09:16.198 { 00:09:16.198 "name": "BaseBdev2", 00:09:16.198 "uuid": "9f3f4d50-19e5-4189-957d-ebb256e2d745", 00:09:16.198 "is_configured": true, 00:09:16.198 "data_offset": 0, 00:09:16.198 "data_size": 65536 00:09:16.198 }, 00:09:16.198 { 00:09:16.198 "name": "BaseBdev3", 00:09:16.198 "uuid": "96b0f4f8-15cc-45e2-9595-45f3b2ce04cc", 00:09:16.198 "is_configured": true, 00:09:16.198 "data_offset": 0, 00:09:16.198 "data_size": 65536 00:09:16.198 } 00:09:16.198 ] 00:09:16.198 }' 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.198 10:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.766 10:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:16.766 10:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:16.766 10:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:16.766 10:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:16.766 10:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:16.766 10:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:16.766 10:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:16.766 10:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:16.766 10:32:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.766 10:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.766 [2024-11-20 10:32:20.018455] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:16.766 10:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.766 10:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:16.766 "name": "Existed_Raid", 00:09:16.766 "aliases": [ 00:09:16.766 "1df8aa81-1726-4a6d-a2c9-d8f0af0ce905" 00:09:16.766 ], 00:09:16.766 "product_name": "Raid Volume", 00:09:16.766 "block_size": 512, 00:09:16.766 "num_blocks": 196608, 00:09:16.766 "uuid": "1df8aa81-1726-4a6d-a2c9-d8f0af0ce905", 00:09:16.766 "assigned_rate_limits": { 00:09:16.766 "rw_ios_per_sec": 0, 00:09:16.766 "rw_mbytes_per_sec": 0, 00:09:16.766 "r_mbytes_per_sec": 0, 00:09:16.766 "w_mbytes_per_sec": 0 00:09:16.766 }, 00:09:16.766 "claimed": false, 00:09:16.766 "zoned": false, 00:09:16.766 "supported_io_types": { 00:09:16.766 "read": true, 00:09:16.766 "write": true, 00:09:16.766 "unmap": true, 00:09:16.766 "flush": true, 00:09:16.766 "reset": true, 00:09:16.766 "nvme_admin": false, 00:09:16.766 "nvme_io": false, 00:09:16.766 "nvme_io_md": false, 00:09:16.766 "write_zeroes": true, 00:09:16.766 "zcopy": false, 00:09:16.766 "get_zone_info": false, 00:09:16.766 "zone_management": false, 00:09:16.766 "zone_append": false, 00:09:16.766 "compare": false, 00:09:16.766 "compare_and_write": false, 00:09:16.766 "abort": false, 00:09:16.766 "seek_hole": false, 00:09:16.766 "seek_data": false, 00:09:16.766 "copy": false, 00:09:16.766 "nvme_iov_md": false 00:09:16.766 }, 00:09:16.766 "memory_domains": [ 00:09:16.766 { 00:09:16.766 "dma_device_id": "system", 00:09:16.766 "dma_device_type": 1 00:09:16.766 }, 00:09:16.766 { 00:09:16.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.766 "dma_device_type": 2 00:09:16.766 }, 
00:09:16.766 { 00:09:16.766 "dma_device_id": "system", 00:09:16.766 "dma_device_type": 1 00:09:16.766 }, 00:09:16.766 { 00:09:16.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.766 "dma_device_type": 2 00:09:16.766 }, 00:09:16.766 { 00:09:16.766 "dma_device_id": "system", 00:09:16.766 "dma_device_type": 1 00:09:16.766 }, 00:09:16.766 { 00:09:16.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.766 "dma_device_type": 2 00:09:16.766 } 00:09:16.766 ], 00:09:16.766 "driver_specific": { 00:09:16.766 "raid": { 00:09:16.766 "uuid": "1df8aa81-1726-4a6d-a2c9-d8f0af0ce905", 00:09:16.766 "strip_size_kb": 64, 00:09:16.766 "state": "online", 00:09:16.766 "raid_level": "raid0", 00:09:16.766 "superblock": false, 00:09:16.766 "num_base_bdevs": 3, 00:09:16.766 "num_base_bdevs_discovered": 3, 00:09:16.766 "num_base_bdevs_operational": 3, 00:09:16.766 "base_bdevs_list": [ 00:09:16.766 { 00:09:16.766 "name": "NewBaseBdev", 00:09:16.766 "uuid": "44a68d64-01cd-40d1-8273-92aa7fe96823", 00:09:16.766 "is_configured": true, 00:09:16.766 "data_offset": 0, 00:09:16.766 "data_size": 65536 00:09:16.766 }, 00:09:16.766 { 00:09:16.766 "name": "BaseBdev2", 00:09:16.766 "uuid": "9f3f4d50-19e5-4189-957d-ebb256e2d745", 00:09:16.766 "is_configured": true, 00:09:16.766 "data_offset": 0, 00:09:16.766 "data_size": 65536 00:09:16.766 }, 00:09:16.766 { 00:09:16.766 "name": "BaseBdev3", 00:09:16.766 "uuid": "96b0f4f8-15cc-45e2-9595-45f3b2ce04cc", 00:09:16.766 "is_configured": true, 00:09:16.766 "data_offset": 0, 00:09:16.766 "data_size": 65536 00:09:16.766 } 00:09:16.766 ] 00:09:16.766 } 00:09:16.766 } 00:09:16.766 }' 00:09:16.766 10:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:16.766 10:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:16.766 BaseBdev2 00:09:16.766 BaseBdev3' 00:09:16.766 10:32:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.766 10:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:16.766 10:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.766 10:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:16.766 10:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.766 10:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.766 10:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.766 10:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.766 10:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.766 10:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.766 10:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.766 10:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:16.766 10:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.766 10:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.766 10:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.766 10:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.766 10:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:16.766 10:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.766 10:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.766 10:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.766 10:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:16.766 10:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.766 10:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.024 10:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.024 10:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.024 10:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.024 10:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:17.024 10:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.024 10:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.024 [2024-11-20 10:32:20.265637] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:17.024 [2024-11-20 10:32:20.265665] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:17.024 [2024-11-20 10:32:20.265745] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:17.024 [2024-11-20 10:32:20.265800] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:17.024 [2024-11-20 10:32:20.265813] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:17.024 10:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.024 10:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63986 00:09:17.024 10:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63986 ']' 00:09:17.024 10:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63986 00:09:17.024 10:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:17.024 10:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:17.024 10:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63986 00:09:17.024 killing process with pid 63986 00:09:17.024 10:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:17.024 10:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:17.024 10:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63986' 00:09:17.024 10:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63986 00:09:17.024 [2024-11-20 10:32:20.311615] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:17.024 10:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63986 00:09:17.282 [2024-11-20 10:32:20.628058] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:18.660 10:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:18.660 00:09:18.660 real 0m11.036s 00:09:18.660 user 0m17.618s 00:09:18.660 sys 0m1.851s 00:09:18.660 10:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:09:18.660 ************************************ 00:09:18.660 END TEST raid_state_function_test 00:09:18.660 ************************************ 00:09:18.660 10:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.660 10:32:21 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:09:18.660 10:32:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:18.660 10:32:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.660 10:32:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:18.660 ************************************ 00:09:18.660 START TEST raid_state_function_test_sb 00:09:18.660 ************************************ 00:09:18.660 10:32:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:09:18.660 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:18.660 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:18.660 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:18.660 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:18.660 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:18.660 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:18.660 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:18.660 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:18.660 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:18.660 10:32:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:18.660 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:18.660 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:18.660 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:18.660 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:18.660 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:18.660 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:18.660 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:18.660 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:18.660 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:18.660 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:18.660 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:18.660 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:18.660 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:18.660 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:18.660 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:18.660 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:18.660 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64613 00:09:18.660 10:32:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:18.660 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64613' 00:09:18.660 Process raid pid: 64613 00:09:18.660 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64613 00:09:18.660 10:32:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64613 ']' 00:09:18.660 10:32:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.660 10:32:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:18.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.660 10:32:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.660 10:32:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:18.660 10:32:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.660 [2024-11-20 10:32:21.951492] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:09:18.660 [2024-11-20 10:32:21.951610] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:18.660 [2024-11-20 10:32:22.127562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.920 [2024-11-20 10:32:22.242971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.180 [2024-11-20 10:32:22.459069] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:19.180 [2024-11-20 10:32:22.459102] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:19.440 10:32:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:19.440 10:32:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:19.440 10:32:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:19.440 10:32:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.440 10:32:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.440 [2024-11-20 10:32:22.813643] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:19.440 [2024-11-20 10:32:22.813804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:19.440 [2024-11-20 10:32:22.813862] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:19.440 [2024-11-20 10:32:22.813904] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:19.440 [2024-11-20 10:32:22.813953] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:09:19.440 [2024-11-20 10:32:22.813995] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:19.440 10:32:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.440 10:32:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:19.440 10:32:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.440 10:32:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.440 10:32:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:19.440 10:32:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.440 10:32:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.440 10:32:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.440 10:32:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.440 10:32:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.440 10:32:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.440 10:32:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.440 10:32:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.440 10:32:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.440 10:32:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.440 10:32:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.440 10:32:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.440 "name": "Existed_Raid", 00:09:19.440 "uuid": "bda4b2a1-b8ea-423c-bc8e-e133232ad2d8", 00:09:19.440 "strip_size_kb": 64, 00:09:19.440 "state": "configuring", 00:09:19.440 "raid_level": "raid0", 00:09:19.440 "superblock": true, 00:09:19.440 "num_base_bdevs": 3, 00:09:19.440 "num_base_bdevs_discovered": 0, 00:09:19.440 "num_base_bdevs_operational": 3, 00:09:19.440 "base_bdevs_list": [ 00:09:19.440 { 00:09:19.440 "name": "BaseBdev1", 00:09:19.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.440 "is_configured": false, 00:09:19.440 "data_offset": 0, 00:09:19.440 "data_size": 0 00:09:19.440 }, 00:09:19.440 { 00:09:19.440 "name": "BaseBdev2", 00:09:19.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.440 "is_configured": false, 00:09:19.440 "data_offset": 0, 00:09:19.440 "data_size": 0 00:09:19.440 }, 00:09:19.440 { 00:09:19.440 "name": "BaseBdev3", 00:09:19.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.440 "is_configured": false, 00:09:19.440 "data_offset": 0, 00:09:19.440 "data_size": 0 00:09:19.440 } 00:09:19.440 ] 00:09:19.440 }' 00:09:19.440 10:32:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.440 10:32:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.009 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:20.009 10:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.009 10:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.009 [2024-11-20 10:32:23.268766] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:20.009 [2024-11-20 10:32:23.268806] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:20.009 10:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.009 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:20.009 10:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.009 10:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.009 [2024-11-20 10:32:23.280732] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:20.009 [2024-11-20 10:32:23.280823] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:20.009 [2024-11-20 10:32:23.280855] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:20.009 [2024-11-20 10:32:23.280878] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:20.009 [2024-11-20 10:32:23.280904] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:20.009 [2024-11-20 10:32:23.280928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:20.009 10:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.009 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:20.009 10:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.009 10:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.010 [2024-11-20 10:32:23.330358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:20.010 BaseBdev1 
00:09:20.010 10:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.010 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:20.010 10:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:20.010 10:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:20.010 10:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:20.010 10:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:20.010 10:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:20.010 10:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:20.010 10:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.010 10:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.010 10:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.010 10:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:20.010 10:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.010 10:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.010 [ 00:09:20.010 { 00:09:20.010 "name": "BaseBdev1", 00:09:20.010 "aliases": [ 00:09:20.010 "a06f441c-149b-45bd-b431-6334493832c8" 00:09:20.010 ], 00:09:20.010 "product_name": "Malloc disk", 00:09:20.010 "block_size": 512, 00:09:20.010 "num_blocks": 65536, 00:09:20.010 "uuid": "a06f441c-149b-45bd-b431-6334493832c8", 00:09:20.010 "assigned_rate_limits": { 00:09:20.010 
"rw_ios_per_sec": 0, 00:09:20.010 "rw_mbytes_per_sec": 0, 00:09:20.010 "r_mbytes_per_sec": 0, 00:09:20.010 "w_mbytes_per_sec": 0 00:09:20.010 }, 00:09:20.010 "claimed": true, 00:09:20.010 "claim_type": "exclusive_write", 00:09:20.010 "zoned": false, 00:09:20.010 "supported_io_types": { 00:09:20.010 "read": true, 00:09:20.010 "write": true, 00:09:20.010 "unmap": true, 00:09:20.010 "flush": true, 00:09:20.010 "reset": true, 00:09:20.010 "nvme_admin": false, 00:09:20.010 "nvme_io": false, 00:09:20.010 "nvme_io_md": false, 00:09:20.010 "write_zeroes": true, 00:09:20.010 "zcopy": true, 00:09:20.010 "get_zone_info": false, 00:09:20.010 "zone_management": false, 00:09:20.010 "zone_append": false, 00:09:20.010 "compare": false, 00:09:20.010 "compare_and_write": false, 00:09:20.010 "abort": true, 00:09:20.010 "seek_hole": false, 00:09:20.010 "seek_data": false, 00:09:20.010 "copy": true, 00:09:20.010 "nvme_iov_md": false 00:09:20.010 }, 00:09:20.010 "memory_domains": [ 00:09:20.010 { 00:09:20.010 "dma_device_id": "system", 00:09:20.010 "dma_device_type": 1 00:09:20.010 }, 00:09:20.010 { 00:09:20.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.010 "dma_device_type": 2 00:09:20.010 } 00:09:20.010 ], 00:09:20.010 "driver_specific": {} 00:09:20.010 } 00:09:20.010 ] 00:09:20.010 10:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.010 10:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:20.010 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:20.010 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.010 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.010 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:20.010 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.010 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.010 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.010 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.010 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.010 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.010 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.010 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.010 10:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.010 10:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.010 10:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.010 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.010 "name": "Existed_Raid", 00:09:20.010 "uuid": "eee5a82c-c300-4eaa-9494-771551c62767", 00:09:20.010 "strip_size_kb": 64, 00:09:20.010 "state": "configuring", 00:09:20.010 "raid_level": "raid0", 00:09:20.010 "superblock": true, 00:09:20.010 "num_base_bdevs": 3, 00:09:20.010 "num_base_bdevs_discovered": 1, 00:09:20.010 "num_base_bdevs_operational": 3, 00:09:20.010 "base_bdevs_list": [ 00:09:20.010 { 00:09:20.010 "name": "BaseBdev1", 00:09:20.010 "uuid": "a06f441c-149b-45bd-b431-6334493832c8", 00:09:20.010 "is_configured": true, 00:09:20.010 "data_offset": 2048, 00:09:20.010 "data_size": 63488 
00:09:20.010 }, 00:09:20.010 { 00:09:20.010 "name": "BaseBdev2", 00:09:20.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.010 "is_configured": false, 00:09:20.010 "data_offset": 0, 00:09:20.010 "data_size": 0 00:09:20.010 }, 00:09:20.010 { 00:09:20.010 "name": "BaseBdev3", 00:09:20.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.010 "is_configured": false, 00:09:20.010 "data_offset": 0, 00:09:20.010 "data_size": 0 00:09:20.010 } 00:09:20.010 ] 00:09:20.010 }' 00:09:20.010 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.010 10:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.580 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:20.580 10:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.580 10:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.580 [2024-11-20 10:32:23.817594] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:20.580 [2024-11-20 10:32:23.817650] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:20.580 10:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.580 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:20.580 10:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.580 10:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.580 [2024-11-20 10:32:23.829642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:20.580 [2024-11-20 
10:32:23.831677] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:20.580 [2024-11-20 10:32:23.831773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:20.580 [2024-11-20 10:32:23.831809] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:20.580 [2024-11-20 10:32:23.831853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:20.580 10:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.580 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:20.580 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:20.580 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:20.580 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.580 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.580 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:20.580 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.580 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.580 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.580 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.580 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.580 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:09:20.580 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.580 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.580 10:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.580 10:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.580 10:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.580 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.580 "name": "Existed_Raid", 00:09:20.580 "uuid": "479f25b2-2877-47fb-945c-5085658c1677", 00:09:20.580 "strip_size_kb": 64, 00:09:20.580 "state": "configuring", 00:09:20.580 "raid_level": "raid0", 00:09:20.580 "superblock": true, 00:09:20.580 "num_base_bdevs": 3, 00:09:20.580 "num_base_bdevs_discovered": 1, 00:09:20.580 "num_base_bdevs_operational": 3, 00:09:20.580 "base_bdevs_list": [ 00:09:20.580 { 00:09:20.580 "name": "BaseBdev1", 00:09:20.580 "uuid": "a06f441c-149b-45bd-b431-6334493832c8", 00:09:20.580 "is_configured": true, 00:09:20.580 "data_offset": 2048, 00:09:20.580 "data_size": 63488 00:09:20.580 }, 00:09:20.580 { 00:09:20.580 "name": "BaseBdev2", 00:09:20.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.580 "is_configured": false, 00:09:20.580 "data_offset": 0, 00:09:20.580 "data_size": 0 00:09:20.580 }, 00:09:20.580 { 00:09:20.580 "name": "BaseBdev3", 00:09:20.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.580 "is_configured": false, 00:09:20.580 "data_offset": 0, 00:09:20.580 "data_size": 0 00:09:20.580 } 00:09:20.580 ] 00:09:20.580 }' 00:09:20.580 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.580 10:32:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:20.840 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:20.840 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.840 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.840 [2024-11-20 10:32:24.286824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:20.840 BaseBdev2 00:09:20.840 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.840 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:20.840 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:20.840 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:20.840 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:20.840 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:20.840 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:20.840 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:20.840 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.840 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.840 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.840 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:20.840 10:32:24 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.840 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.840 [ 00:09:20.840 { 00:09:20.840 "name": "BaseBdev2", 00:09:20.840 "aliases": [ 00:09:20.840 "72d653bc-82eb-440a-8fbe-c1631ed9a258" 00:09:20.840 ], 00:09:20.840 "product_name": "Malloc disk", 00:09:20.841 "block_size": 512, 00:09:20.841 "num_blocks": 65536, 00:09:20.841 "uuid": "72d653bc-82eb-440a-8fbe-c1631ed9a258", 00:09:20.841 "assigned_rate_limits": { 00:09:20.841 "rw_ios_per_sec": 0, 00:09:20.841 "rw_mbytes_per_sec": 0, 00:09:20.841 "r_mbytes_per_sec": 0, 00:09:20.841 "w_mbytes_per_sec": 0 00:09:20.841 }, 00:09:20.841 "claimed": true, 00:09:20.841 "claim_type": "exclusive_write", 00:09:20.841 "zoned": false, 00:09:20.841 "supported_io_types": { 00:09:20.841 "read": true, 00:09:21.102 "write": true, 00:09:21.102 "unmap": true, 00:09:21.102 "flush": true, 00:09:21.102 "reset": true, 00:09:21.102 "nvme_admin": false, 00:09:21.102 "nvme_io": false, 00:09:21.102 "nvme_io_md": false, 00:09:21.102 "write_zeroes": true, 00:09:21.102 "zcopy": true, 00:09:21.102 "get_zone_info": false, 00:09:21.102 "zone_management": false, 00:09:21.102 "zone_append": false, 00:09:21.102 "compare": false, 00:09:21.102 "compare_and_write": false, 00:09:21.102 "abort": true, 00:09:21.102 "seek_hole": false, 00:09:21.102 "seek_data": false, 00:09:21.102 "copy": true, 00:09:21.102 "nvme_iov_md": false 00:09:21.102 }, 00:09:21.102 "memory_domains": [ 00:09:21.102 { 00:09:21.102 "dma_device_id": "system", 00:09:21.102 "dma_device_type": 1 00:09:21.102 }, 00:09:21.102 { 00:09:21.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.102 "dma_device_type": 2 00:09:21.102 } 00:09:21.102 ], 00:09:21.103 "driver_specific": {} 00:09:21.103 } 00:09:21.103 ] 00:09:21.103 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.103 10:32:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:09:21.103 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:21.103 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:21.103 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:21.103 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.103 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.103 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:21.103 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.103 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.103 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.103 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.103 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.103 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.103 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.103 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.103 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.103 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.103 10:32:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.103 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.103 "name": "Existed_Raid", 00:09:21.103 "uuid": "479f25b2-2877-47fb-945c-5085658c1677", 00:09:21.103 "strip_size_kb": 64, 00:09:21.103 "state": "configuring", 00:09:21.103 "raid_level": "raid0", 00:09:21.103 "superblock": true, 00:09:21.103 "num_base_bdevs": 3, 00:09:21.103 "num_base_bdevs_discovered": 2, 00:09:21.103 "num_base_bdevs_operational": 3, 00:09:21.103 "base_bdevs_list": [ 00:09:21.103 { 00:09:21.103 "name": "BaseBdev1", 00:09:21.103 "uuid": "a06f441c-149b-45bd-b431-6334493832c8", 00:09:21.103 "is_configured": true, 00:09:21.103 "data_offset": 2048, 00:09:21.103 "data_size": 63488 00:09:21.103 }, 00:09:21.103 { 00:09:21.103 "name": "BaseBdev2", 00:09:21.103 "uuid": "72d653bc-82eb-440a-8fbe-c1631ed9a258", 00:09:21.103 "is_configured": true, 00:09:21.103 "data_offset": 2048, 00:09:21.103 "data_size": 63488 00:09:21.103 }, 00:09:21.103 { 00:09:21.103 "name": "BaseBdev3", 00:09:21.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.103 "is_configured": false, 00:09:21.103 "data_offset": 0, 00:09:21.103 "data_size": 0 00:09:21.103 } 00:09:21.103 ] 00:09:21.103 }' 00:09:21.103 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.103 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.363 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:21.363 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.363 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.363 [2024-11-20 10:32:24.793993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:21.363 [2024-11-20 10:32:24.794424] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:21.363 BaseBdev3 00:09:21.363 [2024-11-20 10:32:24.794494] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:21.363 [2024-11-20 10:32:24.794803] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:21.363 [2024-11-20 10:32:24.794961] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:21.363 [2024-11-20 10:32:24.794971] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:21.363 [2024-11-20 10:32:24.795162] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:21.363 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.363 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:21.363 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:21.363 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:21.363 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:21.363 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:21.363 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:21.363 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:21.363 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.363 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.363 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:09:21.363 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:21.363 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.363 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.363 [ 00:09:21.363 { 00:09:21.363 "name": "BaseBdev3", 00:09:21.363 "aliases": [ 00:09:21.363 "d5628344-5b4a-4e91-aa62-9bb3dad1d45a" 00:09:21.363 ], 00:09:21.363 "product_name": "Malloc disk", 00:09:21.363 "block_size": 512, 00:09:21.363 "num_blocks": 65536, 00:09:21.363 "uuid": "d5628344-5b4a-4e91-aa62-9bb3dad1d45a", 00:09:21.363 "assigned_rate_limits": { 00:09:21.363 "rw_ios_per_sec": 0, 00:09:21.363 "rw_mbytes_per_sec": 0, 00:09:21.363 "r_mbytes_per_sec": 0, 00:09:21.363 "w_mbytes_per_sec": 0 00:09:21.363 }, 00:09:21.363 "claimed": true, 00:09:21.363 "claim_type": "exclusive_write", 00:09:21.363 "zoned": false, 00:09:21.363 "supported_io_types": { 00:09:21.363 "read": true, 00:09:21.363 "write": true, 00:09:21.363 "unmap": true, 00:09:21.363 "flush": true, 00:09:21.363 "reset": true, 00:09:21.363 "nvme_admin": false, 00:09:21.363 "nvme_io": false, 00:09:21.363 "nvme_io_md": false, 00:09:21.363 "write_zeroes": true, 00:09:21.363 "zcopy": true, 00:09:21.363 "get_zone_info": false, 00:09:21.363 "zone_management": false, 00:09:21.363 "zone_append": false, 00:09:21.363 "compare": false, 00:09:21.363 "compare_and_write": false, 00:09:21.363 "abort": true, 00:09:21.363 "seek_hole": false, 00:09:21.363 "seek_data": false, 00:09:21.363 "copy": true, 00:09:21.363 "nvme_iov_md": false 00:09:21.363 }, 00:09:21.363 "memory_domains": [ 00:09:21.363 { 00:09:21.363 "dma_device_id": "system", 00:09:21.363 "dma_device_type": 1 00:09:21.363 }, 00:09:21.363 { 00:09:21.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.363 "dma_device_type": 2 00:09:21.364 } 00:09:21.364 ], 00:09:21.364 "driver_specific": 
{} 00:09:21.364 } 00:09:21.364 ] 00:09:21.364 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.364 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:21.364 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:21.364 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:21.364 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:21.364 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.364 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:21.364 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:21.364 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.364 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.364 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.364 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.364 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.364 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.364 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.364 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.623 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:21.623 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.623 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.623 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.623 "name": "Existed_Raid", 00:09:21.623 "uuid": "479f25b2-2877-47fb-945c-5085658c1677", 00:09:21.623 "strip_size_kb": 64, 00:09:21.623 "state": "online", 00:09:21.623 "raid_level": "raid0", 00:09:21.623 "superblock": true, 00:09:21.623 "num_base_bdevs": 3, 00:09:21.624 "num_base_bdevs_discovered": 3, 00:09:21.624 "num_base_bdevs_operational": 3, 00:09:21.624 "base_bdevs_list": [ 00:09:21.624 { 00:09:21.624 "name": "BaseBdev1", 00:09:21.624 "uuid": "a06f441c-149b-45bd-b431-6334493832c8", 00:09:21.624 "is_configured": true, 00:09:21.624 "data_offset": 2048, 00:09:21.624 "data_size": 63488 00:09:21.624 }, 00:09:21.624 { 00:09:21.624 "name": "BaseBdev2", 00:09:21.624 "uuid": "72d653bc-82eb-440a-8fbe-c1631ed9a258", 00:09:21.624 "is_configured": true, 00:09:21.624 "data_offset": 2048, 00:09:21.624 "data_size": 63488 00:09:21.624 }, 00:09:21.624 { 00:09:21.624 "name": "BaseBdev3", 00:09:21.624 "uuid": "d5628344-5b4a-4e91-aa62-9bb3dad1d45a", 00:09:21.624 "is_configured": true, 00:09:21.624 "data_offset": 2048, 00:09:21.624 "data_size": 63488 00:09:21.624 } 00:09:21.624 ] 00:09:21.624 }' 00:09:21.624 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.624 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.882 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:21.882 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:21.882 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:09:21.882 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:21.882 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:21.882 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:21.882 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:21.882 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:21.882 10:32:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.882 10:32:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.882 [2024-11-20 10:32:25.269637] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:21.882 10:32:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.882 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:21.882 "name": "Existed_Raid", 00:09:21.882 "aliases": [ 00:09:21.882 "479f25b2-2877-47fb-945c-5085658c1677" 00:09:21.882 ], 00:09:21.882 "product_name": "Raid Volume", 00:09:21.882 "block_size": 512, 00:09:21.882 "num_blocks": 190464, 00:09:21.882 "uuid": "479f25b2-2877-47fb-945c-5085658c1677", 00:09:21.882 "assigned_rate_limits": { 00:09:21.882 "rw_ios_per_sec": 0, 00:09:21.882 "rw_mbytes_per_sec": 0, 00:09:21.882 "r_mbytes_per_sec": 0, 00:09:21.882 "w_mbytes_per_sec": 0 00:09:21.882 }, 00:09:21.882 "claimed": false, 00:09:21.882 "zoned": false, 00:09:21.882 "supported_io_types": { 00:09:21.882 "read": true, 00:09:21.882 "write": true, 00:09:21.882 "unmap": true, 00:09:21.882 "flush": true, 00:09:21.882 "reset": true, 00:09:21.882 "nvme_admin": false, 00:09:21.882 "nvme_io": false, 00:09:21.882 "nvme_io_md": false, 00:09:21.882 
"write_zeroes": true, 00:09:21.882 "zcopy": false, 00:09:21.882 "get_zone_info": false, 00:09:21.882 "zone_management": false, 00:09:21.882 "zone_append": false, 00:09:21.882 "compare": false, 00:09:21.882 "compare_and_write": false, 00:09:21.882 "abort": false, 00:09:21.882 "seek_hole": false, 00:09:21.883 "seek_data": false, 00:09:21.883 "copy": false, 00:09:21.883 "nvme_iov_md": false 00:09:21.883 }, 00:09:21.883 "memory_domains": [ 00:09:21.883 { 00:09:21.883 "dma_device_id": "system", 00:09:21.883 "dma_device_type": 1 00:09:21.883 }, 00:09:21.883 { 00:09:21.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.883 "dma_device_type": 2 00:09:21.883 }, 00:09:21.883 { 00:09:21.883 "dma_device_id": "system", 00:09:21.883 "dma_device_type": 1 00:09:21.883 }, 00:09:21.883 { 00:09:21.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.883 "dma_device_type": 2 00:09:21.883 }, 00:09:21.883 { 00:09:21.883 "dma_device_id": "system", 00:09:21.883 "dma_device_type": 1 00:09:21.883 }, 00:09:21.883 { 00:09:21.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.883 "dma_device_type": 2 00:09:21.883 } 00:09:21.883 ], 00:09:21.883 "driver_specific": { 00:09:21.883 "raid": { 00:09:21.883 "uuid": "479f25b2-2877-47fb-945c-5085658c1677", 00:09:21.883 "strip_size_kb": 64, 00:09:21.883 "state": "online", 00:09:21.883 "raid_level": "raid0", 00:09:21.883 "superblock": true, 00:09:21.883 "num_base_bdevs": 3, 00:09:21.883 "num_base_bdevs_discovered": 3, 00:09:21.883 "num_base_bdevs_operational": 3, 00:09:21.883 "base_bdevs_list": [ 00:09:21.883 { 00:09:21.883 "name": "BaseBdev1", 00:09:21.883 "uuid": "a06f441c-149b-45bd-b431-6334493832c8", 00:09:21.883 "is_configured": true, 00:09:21.883 "data_offset": 2048, 00:09:21.883 "data_size": 63488 00:09:21.883 }, 00:09:21.883 { 00:09:21.883 "name": "BaseBdev2", 00:09:21.883 "uuid": "72d653bc-82eb-440a-8fbe-c1631ed9a258", 00:09:21.883 "is_configured": true, 00:09:21.883 "data_offset": 2048, 00:09:21.883 "data_size": 63488 00:09:21.883 }, 
00:09:21.883 { 00:09:21.883 "name": "BaseBdev3", 00:09:21.883 "uuid": "d5628344-5b4a-4e91-aa62-9bb3dad1d45a", 00:09:21.883 "is_configured": true, 00:09:21.883 "data_offset": 2048, 00:09:21.883 "data_size": 63488 00:09:21.883 } 00:09:21.883 ] 00:09:21.883 } 00:09:21.883 } 00:09:21.883 }' 00:09:21.883 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:21.883 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:21.883 BaseBdev2 00:09:21.883 BaseBdev3' 00:09:21.883 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.141 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:22.141 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.141 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:22.141 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.141 10:32:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.141 10:32:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.141 10:32:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.141 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.141 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.141 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.141 
10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.141 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:22.141 10:32:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.141 10:32:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.141 10:32:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.141 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.141 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.141 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.141 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:22.141 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.141 10:32:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.141 10:32:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.141 10:32:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.141 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.141 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.141 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:22.141 10:32:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.141 10:32:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.141 [2024-11-20 10:32:25.504966] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:22.141 [2024-11-20 10:32:25.505000] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:22.141 [2024-11-20 10:32:25.505061] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:22.141 10:32:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.141 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:22.141 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:22.142 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:22.142 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:22.142 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:22.142 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:22.142 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.142 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:22.142 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:22.142 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.142 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:22.142 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:22.142 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.142 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.142 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.142 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.142 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.142 10:32:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.142 10:32:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.401 10:32:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.401 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.401 "name": "Existed_Raid", 00:09:22.401 "uuid": "479f25b2-2877-47fb-945c-5085658c1677", 00:09:22.401 "strip_size_kb": 64, 00:09:22.401 "state": "offline", 00:09:22.401 "raid_level": "raid0", 00:09:22.401 "superblock": true, 00:09:22.401 "num_base_bdevs": 3, 00:09:22.401 "num_base_bdevs_discovered": 2, 00:09:22.401 "num_base_bdevs_operational": 2, 00:09:22.401 "base_bdevs_list": [ 00:09:22.401 { 00:09:22.401 "name": null, 00:09:22.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.401 "is_configured": false, 00:09:22.401 "data_offset": 0, 00:09:22.401 "data_size": 63488 00:09:22.401 }, 00:09:22.401 { 00:09:22.401 "name": "BaseBdev2", 00:09:22.401 "uuid": "72d653bc-82eb-440a-8fbe-c1631ed9a258", 00:09:22.401 "is_configured": true, 00:09:22.401 "data_offset": 2048, 00:09:22.401 "data_size": 63488 00:09:22.401 }, 00:09:22.401 { 00:09:22.401 "name": "BaseBdev3", 00:09:22.401 "uuid": "d5628344-5b4a-4e91-aa62-9bb3dad1d45a", 
00:09:22.401 "is_configured": true, 00:09:22.401 "data_offset": 2048, 00:09:22.401 "data_size": 63488 00:09:22.401 } 00:09:22.401 ] 00:09:22.401 }' 00:09:22.401 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.401 10:32:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.661 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:22.661 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:22.661 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.661 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.661 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.661 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:22.661 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.919 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:22.919 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:22.919 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:22.919 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.919 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.919 [2024-11-20 10:32:26.153699] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:22.919 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.919 10:32:26 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:22.919 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:22.919 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.919 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:22.919 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.919 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.919 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.919 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:22.919 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:22.919 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:22.920 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.920 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.920 [2024-11-20 10:32:26.316466] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:22.920 [2024-11-20 10:32:26.316570] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:23.180 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.180 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:23.180 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:23.180 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] 
| select(.)' 00:09:23.180 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.180 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.180 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.180 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.180 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:23.180 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:23.180 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:23.180 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:23.180 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:23.180 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:23.180 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.180 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.180 BaseBdev2 00:09:23.180 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.180 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:23.180 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:23.180 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:23.180 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:23.180 10:32:26 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:23.180 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:23.180 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:23.180 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.180 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.180 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.180 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:23.180 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.180 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.180 [ 00:09:23.180 { 00:09:23.180 "name": "BaseBdev2", 00:09:23.180 "aliases": [ 00:09:23.180 "3471ecd6-b1b8-4a02-823e-2f616cfbcfcd" 00:09:23.180 ], 00:09:23.180 "product_name": "Malloc disk", 00:09:23.180 "block_size": 512, 00:09:23.180 "num_blocks": 65536, 00:09:23.180 "uuid": "3471ecd6-b1b8-4a02-823e-2f616cfbcfcd", 00:09:23.180 "assigned_rate_limits": { 00:09:23.180 "rw_ios_per_sec": 0, 00:09:23.180 "rw_mbytes_per_sec": 0, 00:09:23.180 "r_mbytes_per_sec": 0, 00:09:23.180 "w_mbytes_per_sec": 0 00:09:23.180 }, 00:09:23.180 "claimed": false, 00:09:23.180 "zoned": false, 00:09:23.180 "supported_io_types": { 00:09:23.180 "read": true, 00:09:23.180 "write": true, 00:09:23.180 "unmap": true, 00:09:23.180 "flush": true, 00:09:23.180 "reset": true, 00:09:23.180 "nvme_admin": false, 00:09:23.180 "nvme_io": false, 00:09:23.180 "nvme_io_md": false, 00:09:23.180 "write_zeroes": true, 00:09:23.180 "zcopy": true, 00:09:23.180 "get_zone_info": false, 00:09:23.180 "zone_management": false, 00:09:23.180 
"zone_append": false, 00:09:23.180 "compare": false, 00:09:23.180 "compare_and_write": false, 00:09:23.180 "abort": true, 00:09:23.181 "seek_hole": false, 00:09:23.181 "seek_data": false, 00:09:23.181 "copy": true, 00:09:23.181 "nvme_iov_md": false 00:09:23.181 }, 00:09:23.181 "memory_domains": [ 00:09:23.181 { 00:09:23.181 "dma_device_id": "system", 00:09:23.181 "dma_device_type": 1 00:09:23.181 }, 00:09:23.181 { 00:09:23.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.181 "dma_device_type": 2 00:09:23.181 } 00:09:23.181 ], 00:09:23.181 "driver_specific": {} 00:09:23.181 } 00:09:23.181 ] 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.181 BaseBdev3 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:23.181 
10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.181 [ 00:09:23.181 { 00:09:23.181 "name": "BaseBdev3", 00:09:23.181 "aliases": [ 00:09:23.181 "893066d8-6ceb-4701-a50e-b6ec9a4d4278" 00:09:23.181 ], 00:09:23.181 "product_name": "Malloc disk", 00:09:23.181 "block_size": 512, 00:09:23.181 "num_blocks": 65536, 00:09:23.181 "uuid": "893066d8-6ceb-4701-a50e-b6ec9a4d4278", 00:09:23.181 "assigned_rate_limits": { 00:09:23.181 "rw_ios_per_sec": 0, 00:09:23.181 "rw_mbytes_per_sec": 0, 00:09:23.181 "r_mbytes_per_sec": 0, 00:09:23.181 "w_mbytes_per_sec": 0 00:09:23.181 }, 00:09:23.181 "claimed": false, 00:09:23.181 "zoned": false, 00:09:23.181 "supported_io_types": { 00:09:23.181 "read": true, 00:09:23.181 "write": true, 00:09:23.181 "unmap": true, 00:09:23.181 "flush": true, 00:09:23.181 "reset": true, 00:09:23.181 "nvme_admin": false, 00:09:23.181 "nvme_io": false, 00:09:23.181 "nvme_io_md": false, 00:09:23.181 "write_zeroes": true, 00:09:23.181 "zcopy": true, 00:09:23.181 "get_zone_info": false, 
00:09:23.181 "zone_management": false, 00:09:23.181 "zone_append": false, 00:09:23.181 "compare": false, 00:09:23.181 "compare_and_write": false, 00:09:23.181 "abort": true, 00:09:23.181 "seek_hole": false, 00:09:23.181 "seek_data": false, 00:09:23.181 "copy": true, 00:09:23.181 "nvme_iov_md": false 00:09:23.181 }, 00:09:23.181 "memory_domains": [ 00:09:23.181 { 00:09:23.181 "dma_device_id": "system", 00:09:23.181 "dma_device_type": 1 00:09:23.181 }, 00:09:23.181 { 00:09:23.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.181 "dma_device_type": 2 00:09:23.181 } 00:09:23.181 ], 00:09:23.181 "driver_specific": {} 00:09:23.181 } 00:09:23.181 ] 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.181 [2024-11-20 10:32:26.634266] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:23.181 [2024-11-20 10:32:26.634387] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:23.181 [2024-11-20 10:32:26.634444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:23.181 [2024-11-20 10:32:26.636411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.181 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.440 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.440 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:09:23.440 "name": "Existed_Raid", 00:09:23.440 "uuid": "2a9d6eb3-ef93-4cb1-b18a-67b321b20d20", 00:09:23.440 "strip_size_kb": 64, 00:09:23.440 "state": "configuring", 00:09:23.440 "raid_level": "raid0", 00:09:23.440 "superblock": true, 00:09:23.440 "num_base_bdevs": 3, 00:09:23.440 "num_base_bdevs_discovered": 2, 00:09:23.440 "num_base_bdevs_operational": 3, 00:09:23.440 "base_bdevs_list": [ 00:09:23.440 { 00:09:23.440 "name": "BaseBdev1", 00:09:23.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.440 "is_configured": false, 00:09:23.440 "data_offset": 0, 00:09:23.440 "data_size": 0 00:09:23.440 }, 00:09:23.440 { 00:09:23.440 "name": "BaseBdev2", 00:09:23.440 "uuid": "3471ecd6-b1b8-4a02-823e-2f616cfbcfcd", 00:09:23.440 "is_configured": true, 00:09:23.440 "data_offset": 2048, 00:09:23.440 "data_size": 63488 00:09:23.440 }, 00:09:23.440 { 00:09:23.440 "name": "BaseBdev3", 00:09:23.440 "uuid": "893066d8-6ceb-4701-a50e-b6ec9a4d4278", 00:09:23.440 "is_configured": true, 00:09:23.440 "data_offset": 2048, 00:09:23.440 "data_size": 63488 00:09:23.440 } 00:09:23.440 ] 00:09:23.440 }' 00:09:23.440 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.440 10:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.700 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:23.700 10:32:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.700 10:32:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.700 [2024-11-20 10:32:27.089460] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:23.700 10:32:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.700 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:23.700 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.700 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.700 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:23.700 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.700 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.700 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.700 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.700 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.700 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.700 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.700 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.700 10:32:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.700 10:32:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.700 10:32:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.700 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.700 "name": "Existed_Raid", 00:09:23.700 "uuid": "2a9d6eb3-ef93-4cb1-b18a-67b321b20d20", 00:09:23.700 "strip_size_kb": 64, 00:09:23.700 "state": "configuring", 00:09:23.700 "raid_level": "raid0", 
00:09:23.700 "superblock": true, 00:09:23.700 "num_base_bdevs": 3, 00:09:23.700 "num_base_bdevs_discovered": 1, 00:09:23.700 "num_base_bdevs_operational": 3, 00:09:23.700 "base_bdevs_list": [ 00:09:23.700 { 00:09:23.700 "name": "BaseBdev1", 00:09:23.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.700 "is_configured": false, 00:09:23.700 "data_offset": 0, 00:09:23.700 "data_size": 0 00:09:23.700 }, 00:09:23.700 { 00:09:23.700 "name": null, 00:09:23.700 "uuid": "3471ecd6-b1b8-4a02-823e-2f616cfbcfcd", 00:09:23.700 "is_configured": false, 00:09:23.700 "data_offset": 0, 00:09:23.700 "data_size": 63488 00:09:23.700 }, 00:09:23.700 { 00:09:23.700 "name": "BaseBdev3", 00:09:23.700 "uuid": "893066d8-6ceb-4701-a50e-b6ec9a4d4278", 00:09:23.700 "is_configured": true, 00:09:23.700 "data_offset": 2048, 00:09:23.700 "data_size": 63488 00:09:23.700 } 00:09:23.700 ] 00:09:23.700 }' 00:09:23.700 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.700 10:32:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.274 [2024-11-20 10:32:27.575128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:24.274 BaseBdev1 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.274 [ 00:09:24.274 { 00:09:24.274 "name": "BaseBdev1", 00:09:24.274 
"aliases": [ 00:09:24.274 "5651f5d4-a042-4b96-9a33-03b04829ddfe" 00:09:24.274 ], 00:09:24.274 "product_name": "Malloc disk", 00:09:24.274 "block_size": 512, 00:09:24.274 "num_blocks": 65536, 00:09:24.274 "uuid": "5651f5d4-a042-4b96-9a33-03b04829ddfe", 00:09:24.274 "assigned_rate_limits": { 00:09:24.274 "rw_ios_per_sec": 0, 00:09:24.274 "rw_mbytes_per_sec": 0, 00:09:24.274 "r_mbytes_per_sec": 0, 00:09:24.274 "w_mbytes_per_sec": 0 00:09:24.274 }, 00:09:24.274 "claimed": true, 00:09:24.274 "claim_type": "exclusive_write", 00:09:24.274 "zoned": false, 00:09:24.274 "supported_io_types": { 00:09:24.274 "read": true, 00:09:24.274 "write": true, 00:09:24.274 "unmap": true, 00:09:24.274 "flush": true, 00:09:24.274 "reset": true, 00:09:24.274 "nvme_admin": false, 00:09:24.274 "nvme_io": false, 00:09:24.274 "nvme_io_md": false, 00:09:24.274 "write_zeroes": true, 00:09:24.274 "zcopy": true, 00:09:24.274 "get_zone_info": false, 00:09:24.274 "zone_management": false, 00:09:24.274 "zone_append": false, 00:09:24.274 "compare": false, 00:09:24.274 "compare_and_write": false, 00:09:24.274 "abort": true, 00:09:24.274 "seek_hole": false, 00:09:24.274 "seek_data": false, 00:09:24.274 "copy": true, 00:09:24.274 "nvme_iov_md": false 00:09:24.274 }, 00:09:24.274 "memory_domains": [ 00:09:24.274 { 00:09:24.274 "dma_device_id": "system", 00:09:24.274 "dma_device_type": 1 00:09:24.274 }, 00:09:24.274 { 00:09:24.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.274 "dma_device_type": 2 00:09:24.274 } 00:09:24.274 ], 00:09:24.274 "driver_specific": {} 00:09:24.274 } 00:09:24.274 ] 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:24.274 10:32:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.274 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.274 "name": "Existed_Raid", 00:09:24.274 "uuid": "2a9d6eb3-ef93-4cb1-b18a-67b321b20d20", 00:09:24.274 "strip_size_kb": 64, 00:09:24.274 "state": "configuring", 00:09:24.274 "raid_level": "raid0", 00:09:24.274 "superblock": true, 00:09:24.274 "num_base_bdevs": 3, 00:09:24.274 
"num_base_bdevs_discovered": 2, 00:09:24.274 "num_base_bdevs_operational": 3, 00:09:24.274 "base_bdevs_list": [ 00:09:24.274 { 00:09:24.274 "name": "BaseBdev1", 00:09:24.274 "uuid": "5651f5d4-a042-4b96-9a33-03b04829ddfe", 00:09:24.274 "is_configured": true, 00:09:24.274 "data_offset": 2048, 00:09:24.274 "data_size": 63488 00:09:24.274 }, 00:09:24.274 { 00:09:24.274 "name": null, 00:09:24.274 "uuid": "3471ecd6-b1b8-4a02-823e-2f616cfbcfcd", 00:09:24.274 "is_configured": false, 00:09:24.274 "data_offset": 0, 00:09:24.274 "data_size": 63488 00:09:24.274 }, 00:09:24.274 { 00:09:24.274 "name": "BaseBdev3", 00:09:24.274 "uuid": "893066d8-6ceb-4701-a50e-b6ec9a4d4278", 00:09:24.275 "is_configured": true, 00:09:24.275 "data_offset": 2048, 00:09:24.275 "data_size": 63488 00:09:24.275 } 00:09:24.275 ] 00:09:24.275 }' 00:09:24.275 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.275 10:32:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.844 10:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.844 10:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.844 10:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.844 10:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:24.844 10:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.844 10:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:24.845 10:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:24.845 10:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.845 10:32:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.845 [2024-11-20 10:32:28.110304] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:24.845 10:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.845 10:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:24.845 10:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.845 10:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.845 10:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:24.845 10:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.845 10:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.845 10:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.845 10:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.845 10:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.845 10:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.845 10:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.845 10:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.845 10:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.845 10:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.845 10:32:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.845 10:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.845 "name": "Existed_Raid", 00:09:24.845 "uuid": "2a9d6eb3-ef93-4cb1-b18a-67b321b20d20", 00:09:24.845 "strip_size_kb": 64, 00:09:24.845 "state": "configuring", 00:09:24.845 "raid_level": "raid0", 00:09:24.845 "superblock": true, 00:09:24.845 "num_base_bdevs": 3, 00:09:24.845 "num_base_bdevs_discovered": 1, 00:09:24.845 "num_base_bdevs_operational": 3, 00:09:24.845 "base_bdevs_list": [ 00:09:24.845 { 00:09:24.845 "name": "BaseBdev1", 00:09:24.845 "uuid": "5651f5d4-a042-4b96-9a33-03b04829ddfe", 00:09:24.845 "is_configured": true, 00:09:24.845 "data_offset": 2048, 00:09:24.845 "data_size": 63488 00:09:24.845 }, 00:09:24.845 { 00:09:24.845 "name": null, 00:09:24.845 "uuid": "3471ecd6-b1b8-4a02-823e-2f616cfbcfcd", 00:09:24.845 "is_configured": false, 00:09:24.845 "data_offset": 0, 00:09:24.845 "data_size": 63488 00:09:24.845 }, 00:09:24.845 { 00:09:24.845 "name": null, 00:09:24.845 "uuid": "893066d8-6ceb-4701-a50e-b6ec9a4d4278", 00:09:24.845 "is_configured": false, 00:09:24.845 "data_offset": 0, 00:09:24.845 "data_size": 63488 00:09:24.845 } 00:09:24.845 ] 00:09:24.845 }' 00:09:24.845 10:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.845 10:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.416 10:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.416 10:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:25.416 10:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.416 10:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.416 10:32:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.416 10:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:25.416 10:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:25.416 10:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.416 10:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.416 [2024-11-20 10:32:28.633472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:25.416 10:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.416 10:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:25.416 10:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.416 10:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.416 10:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:25.416 10:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.416 10:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.416 10:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.416 10:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.416 10:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.416 10:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:25.416 10:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.416 10:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.416 10:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.416 10:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.416 10:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.416 10:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.416 "name": "Existed_Raid", 00:09:25.416 "uuid": "2a9d6eb3-ef93-4cb1-b18a-67b321b20d20", 00:09:25.416 "strip_size_kb": 64, 00:09:25.416 "state": "configuring", 00:09:25.416 "raid_level": "raid0", 00:09:25.416 "superblock": true, 00:09:25.416 "num_base_bdevs": 3, 00:09:25.416 "num_base_bdevs_discovered": 2, 00:09:25.416 "num_base_bdevs_operational": 3, 00:09:25.416 "base_bdevs_list": [ 00:09:25.416 { 00:09:25.416 "name": "BaseBdev1", 00:09:25.416 "uuid": "5651f5d4-a042-4b96-9a33-03b04829ddfe", 00:09:25.416 "is_configured": true, 00:09:25.416 "data_offset": 2048, 00:09:25.416 "data_size": 63488 00:09:25.416 }, 00:09:25.416 { 00:09:25.416 "name": null, 00:09:25.416 "uuid": "3471ecd6-b1b8-4a02-823e-2f616cfbcfcd", 00:09:25.416 "is_configured": false, 00:09:25.416 "data_offset": 0, 00:09:25.416 "data_size": 63488 00:09:25.416 }, 00:09:25.416 { 00:09:25.416 "name": "BaseBdev3", 00:09:25.416 "uuid": "893066d8-6ceb-4701-a50e-b6ec9a4d4278", 00:09:25.416 "is_configured": true, 00:09:25.416 "data_offset": 2048, 00:09:25.416 "data_size": 63488 00:09:25.416 } 00:09:25.416 ] 00:09:25.416 }' 00:09:25.416 10:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.416 10:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:09:25.676 10:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:25.676 10:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.676 10:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.676 10:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.676 10:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.676 10:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:25.676 10:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:25.676 10:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.676 10:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.676 [2024-11-20 10:32:29.064742] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:25.936 10:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.936 10:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:25.936 10:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.936 10:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.936 10:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:25.936 10:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.936 10:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:09:25.936 10:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.936 10:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.936 10:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.936 10:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.936 10:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.936 10:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.936 10:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.936 10:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.936 10:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.936 10:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.936 "name": "Existed_Raid", 00:09:25.936 "uuid": "2a9d6eb3-ef93-4cb1-b18a-67b321b20d20", 00:09:25.936 "strip_size_kb": 64, 00:09:25.936 "state": "configuring", 00:09:25.936 "raid_level": "raid0", 00:09:25.936 "superblock": true, 00:09:25.936 "num_base_bdevs": 3, 00:09:25.936 "num_base_bdevs_discovered": 1, 00:09:25.936 "num_base_bdevs_operational": 3, 00:09:25.936 "base_bdevs_list": [ 00:09:25.936 { 00:09:25.936 "name": null, 00:09:25.936 "uuid": "5651f5d4-a042-4b96-9a33-03b04829ddfe", 00:09:25.936 "is_configured": false, 00:09:25.936 "data_offset": 0, 00:09:25.936 "data_size": 63488 00:09:25.936 }, 00:09:25.936 { 00:09:25.936 "name": null, 00:09:25.936 "uuid": "3471ecd6-b1b8-4a02-823e-2f616cfbcfcd", 00:09:25.936 "is_configured": false, 00:09:25.936 "data_offset": 0, 00:09:25.936 "data_size": 63488 00:09:25.936 
}, 00:09:25.936 { 00:09:25.936 "name": "BaseBdev3", 00:09:25.936 "uuid": "893066d8-6ceb-4701-a50e-b6ec9a4d4278", 00:09:25.936 "is_configured": true, 00:09:25.936 "data_offset": 2048, 00:09:25.936 "data_size": 63488 00:09:25.936 } 00:09:25.936 ] 00:09:25.936 }' 00:09:25.936 10:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.936 10:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.196 10:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:26.196 10:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.196 10:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.196 10:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.196 10:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.196 10:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:26.196 10:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:26.196 10:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.196 10:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.196 [2024-11-20 10:32:29.656227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:26.196 10:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.196 10:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:26.196 10:32:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.196 10:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.196 10:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:26.196 10:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.196 10:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.196 10:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.196 10:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.196 10:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.196 10:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.196 10:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.196 10:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.196 10:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.196 10:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.456 10:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.456 10:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.456 "name": "Existed_Raid", 00:09:26.456 "uuid": "2a9d6eb3-ef93-4cb1-b18a-67b321b20d20", 00:09:26.456 "strip_size_kb": 64, 00:09:26.456 "state": "configuring", 00:09:26.456 "raid_level": "raid0", 00:09:26.456 "superblock": true, 00:09:26.456 "num_base_bdevs": 3, 00:09:26.456 "num_base_bdevs_discovered": 2, 00:09:26.456 
"num_base_bdevs_operational": 3, 00:09:26.456 "base_bdevs_list": [ 00:09:26.456 { 00:09:26.456 "name": null, 00:09:26.456 "uuid": "5651f5d4-a042-4b96-9a33-03b04829ddfe", 00:09:26.456 "is_configured": false, 00:09:26.456 "data_offset": 0, 00:09:26.456 "data_size": 63488 00:09:26.456 }, 00:09:26.456 { 00:09:26.456 "name": "BaseBdev2", 00:09:26.456 "uuid": "3471ecd6-b1b8-4a02-823e-2f616cfbcfcd", 00:09:26.456 "is_configured": true, 00:09:26.456 "data_offset": 2048, 00:09:26.456 "data_size": 63488 00:09:26.456 }, 00:09:26.456 { 00:09:26.456 "name": "BaseBdev3", 00:09:26.456 "uuid": "893066d8-6ceb-4701-a50e-b6ec9a4d4278", 00:09:26.456 "is_configured": true, 00:09:26.456 "data_offset": 2048, 00:09:26.456 "data_size": 63488 00:09:26.456 } 00:09:26.456 ] 00:09:26.456 }' 00:09:26.456 10:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.456 10:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.716 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.716 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:26.716 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.716 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.716 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.716 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:26.716 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.716 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:26.716 10:32:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.716 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.716 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.716 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5651f5d4-a042-4b96-9a33-03b04829ddfe 00:09:26.716 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.716 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.716 NewBaseBdev 00:09:26.716 [2024-11-20 10:32:30.185685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:26.716 [2024-11-20 10:32:30.185945] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:26.716 [2024-11-20 10:32:30.185964] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:26.716 [2024-11-20 10:32:30.186235] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:26.716 [2024-11-20 10:32:30.186418] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:26.716 [2024-11-20 10:32:30.186431] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:26.716 [2024-11-20 10:32:30.186598] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:26.716 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.716 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:26.716 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:26.716 10:32:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:26.716 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:26.716 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:26.716 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:26.716 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:26.716 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.716 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.976 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.976 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:26.976 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.976 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.976 [ 00:09:26.976 { 00:09:26.976 "name": "NewBaseBdev", 00:09:26.976 "aliases": [ 00:09:26.976 "5651f5d4-a042-4b96-9a33-03b04829ddfe" 00:09:26.976 ], 00:09:26.976 "product_name": "Malloc disk", 00:09:26.976 "block_size": 512, 00:09:26.976 "num_blocks": 65536, 00:09:26.976 "uuid": "5651f5d4-a042-4b96-9a33-03b04829ddfe", 00:09:26.976 "assigned_rate_limits": { 00:09:26.976 "rw_ios_per_sec": 0, 00:09:26.976 "rw_mbytes_per_sec": 0, 00:09:26.976 "r_mbytes_per_sec": 0, 00:09:26.976 "w_mbytes_per_sec": 0 00:09:26.976 }, 00:09:26.976 "claimed": true, 00:09:26.976 "claim_type": "exclusive_write", 00:09:26.976 "zoned": false, 00:09:26.976 "supported_io_types": { 00:09:26.976 "read": true, 00:09:26.976 "write": true, 00:09:26.976 "unmap": true, 
00:09:26.976 "flush": true, 00:09:26.976 "reset": true, 00:09:26.976 "nvme_admin": false, 00:09:26.976 "nvme_io": false, 00:09:26.976 "nvme_io_md": false, 00:09:26.976 "write_zeroes": true, 00:09:26.976 "zcopy": true, 00:09:26.976 "get_zone_info": false, 00:09:26.976 "zone_management": false, 00:09:26.976 "zone_append": false, 00:09:26.976 "compare": false, 00:09:26.976 "compare_and_write": false, 00:09:26.976 "abort": true, 00:09:26.976 "seek_hole": false, 00:09:26.976 "seek_data": false, 00:09:26.976 "copy": true, 00:09:26.976 "nvme_iov_md": false 00:09:26.976 }, 00:09:26.976 "memory_domains": [ 00:09:26.976 { 00:09:26.976 "dma_device_id": "system", 00:09:26.976 "dma_device_type": 1 00:09:26.976 }, 00:09:26.976 { 00:09:26.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.976 "dma_device_type": 2 00:09:26.976 } 00:09:26.976 ], 00:09:26.976 "driver_specific": {} 00:09:26.976 } 00:09:26.976 ] 00:09:26.976 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.976 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:26.976 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:26.976 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.976 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:26.976 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:26.976 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.976 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.976 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.976 10:32:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.976 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.976 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.976 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.976 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.976 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.976 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.976 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.976 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.976 "name": "Existed_Raid", 00:09:26.976 "uuid": "2a9d6eb3-ef93-4cb1-b18a-67b321b20d20", 00:09:26.976 "strip_size_kb": 64, 00:09:26.976 "state": "online", 00:09:26.976 "raid_level": "raid0", 00:09:26.976 "superblock": true, 00:09:26.976 "num_base_bdevs": 3, 00:09:26.976 "num_base_bdevs_discovered": 3, 00:09:26.976 "num_base_bdevs_operational": 3, 00:09:26.976 "base_bdevs_list": [ 00:09:26.976 { 00:09:26.976 "name": "NewBaseBdev", 00:09:26.976 "uuid": "5651f5d4-a042-4b96-9a33-03b04829ddfe", 00:09:26.976 "is_configured": true, 00:09:26.976 "data_offset": 2048, 00:09:26.976 "data_size": 63488 00:09:26.976 }, 00:09:26.976 { 00:09:26.976 "name": "BaseBdev2", 00:09:26.976 "uuid": "3471ecd6-b1b8-4a02-823e-2f616cfbcfcd", 00:09:26.976 "is_configured": true, 00:09:26.976 "data_offset": 2048, 00:09:26.976 "data_size": 63488 00:09:26.977 }, 00:09:26.977 { 00:09:26.977 "name": "BaseBdev3", 00:09:26.977 "uuid": "893066d8-6ceb-4701-a50e-b6ec9a4d4278", 00:09:26.977 "is_configured": 
true, 00:09:26.977 "data_offset": 2048, 00:09:26.977 "data_size": 63488 00:09:26.977 } 00:09:26.977 ] 00:09:26.977 }' 00:09:26.977 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.977 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.238 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:27.238 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:27.238 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:27.238 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:27.238 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:27.238 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:27.238 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:27.238 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:27.238 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.238 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.238 [2024-11-20 10:32:30.669216] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:27.238 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.238 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:27.238 "name": "Existed_Raid", 00:09:27.238 "aliases": [ 00:09:27.238 "2a9d6eb3-ef93-4cb1-b18a-67b321b20d20" 00:09:27.238 ], 00:09:27.238 "product_name": "Raid Volume", 
00:09:27.238 "block_size": 512, 00:09:27.238 "num_blocks": 190464, 00:09:27.238 "uuid": "2a9d6eb3-ef93-4cb1-b18a-67b321b20d20", 00:09:27.238 "assigned_rate_limits": { 00:09:27.238 "rw_ios_per_sec": 0, 00:09:27.238 "rw_mbytes_per_sec": 0, 00:09:27.238 "r_mbytes_per_sec": 0, 00:09:27.238 "w_mbytes_per_sec": 0 00:09:27.238 }, 00:09:27.238 "claimed": false, 00:09:27.238 "zoned": false, 00:09:27.238 "supported_io_types": { 00:09:27.238 "read": true, 00:09:27.238 "write": true, 00:09:27.238 "unmap": true, 00:09:27.238 "flush": true, 00:09:27.238 "reset": true, 00:09:27.238 "nvme_admin": false, 00:09:27.238 "nvme_io": false, 00:09:27.238 "nvme_io_md": false, 00:09:27.238 "write_zeroes": true, 00:09:27.238 "zcopy": false, 00:09:27.238 "get_zone_info": false, 00:09:27.238 "zone_management": false, 00:09:27.238 "zone_append": false, 00:09:27.238 "compare": false, 00:09:27.238 "compare_and_write": false, 00:09:27.238 "abort": false, 00:09:27.238 "seek_hole": false, 00:09:27.238 "seek_data": false, 00:09:27.238 "copy": false, 00:09:27.238 "nvme_iov_md": false 00:09:27.238 }, 00:09:27.238 "memory_domains": [ 00:09:27.238 { 00:09:27.238 "dma_device_id": "system", 00:09:27.238 "dma_device_type": 1 00:09:27.238 }, 00:09:27.238 { 00:09:27.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.238 "dma_device_type": 2 00:09:27.238 }, 00:09:27.238 { 00:09:27.238 "dma_device_id": "system", 00:09:27.238 "dma_device_type": 1 00:09:27.238 }, 00:09:27.238 { 00:09:27.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.238 "dma_device_type": 2 00:09:27.238 }, 00:09:27.238 { 00:09:27.238 "dma_device_id": "system", 00:09:27.238 "dma_device_type": 1 00:09:27.238 }, 00:09:27.238 { 00:09:27.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.238 "dma_device_type": 2 00:09:27.238 } 00:09:27.238 ], 00:09:27.238 "driver_specific": { 00:09:27.238 "raid": { 00:09:27.238 "uuid": "2a9d6eb3-ef93-4cb1-b18a-67b321b20d20", 00:09:27.238 "strip_size_kb": 64, 00:09:27.238 "state": "online", 00:09:27.238 
"raid_level": "raid0", 00:09:27.238 "superblock": true, 00:09:27.238 "num_base_bdevs": 3, 00:09:27.238 "num_base_bdevs_discovered": 3, 00:09:27.238 "num_base_bdevs_operational": 3, 00:09:27.238 "base_bdevs_list": [ 00:09:27.238 { 00:09:27.238 "name": "NewBaseBdev", 00:09:27.238 "uuid": "5651f5d4-a042-4b96-9a33-03b04829ddfe", 00:09:27.238 "is_configured": true, 00:09:27.238 "data_offset": 2048, 00:09:27.238 "data_size": 63488 00:09:27.238 }, 00:09:27.238 { 00:09:27.238 "name": "BaseBdev2", 00:09:27.238 "uuid": "3471ecd6-b1b8-4a02-823e-2f616cfbcfcd", 00:09:27.238 "is_configured": true, 00:09:27.238 "data_offset": 2048, 00:09:27.238 "data_size": 63488 00:09:27.238 }, 00:09:27.238 { 00:09:27.238 "name": "BaseBdev3", 00:09:27.238 "uuid": "893066d8-6ceb-4701-a50e-b6ec9a4d4278", 00:09:27.238 "is_configured": true, 00:09:27.238 "data_offset": 2048, 00:09:27.238 "data_size": 63488 00:09:27.238 } 00:09:27.238 ] 00:09:27.238 } 00:09:27.238 } 00:09:27.238 }' 00:09:27.238 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:27.512 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:27.512 BaseBdev2 00:09:27.512 BaseBdev3' 00:09:27.512 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.512 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:27.512 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.512 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:27.512 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:27.512 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.512 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.512 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.512 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.512 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.512 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.512 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:27.512 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.512 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.512 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.512 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.512 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.512 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.512 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.512 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:27.512 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.512 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.512 
10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.512 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.512 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.512 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.512 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:27.512 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.512 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.512 [2024-11-20 10:32:30.952446] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:27.512 [2024-11-20 10:32:30.952473] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:27.512 [2024-11-20 10:32:30.952554] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:27.512 [2024-11-20 10:32:30.952607] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:27.512 [2024-11-20 10:32:30.952619] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:27.512 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.512 10:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64613 00:09:27.512 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64613 ']' 00:09:27.512 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64613 00:09:27.512 10:32:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:27.512 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:27.512 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64613 00:09:27.771 killing process with pid 64613 00:09:27.771 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:27.771 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:27.771 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64613' 00:09:27.771 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64613 00:09:27.771 [2024-11-20 10:32:31.000631] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:27.771 10:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64613 00:09:28.030 [2024-11-20 10:32:31.303184] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:29.409 10:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:29.409 00:09:29.409 real 0m10.588s 00:09:29.409 user 0m16.825s 00:09:29.409 sys 0m1.840s 00:09:29.409 ************************************ 00:09:29.409 END TEST raid_state_function_test_sb 00:09:29.409 ************************************ 00:09:29.409 10:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.409 10:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.409 10:32:32 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:09:29.409 10:32:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:29.409 10:32:32 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.409 10:32:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:29.409 ************************************ 00:09:29.409 START TEST raid_superblock_test 00:09:29.409 ************************************ 00:09:29.409 10:32:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:09:29.409 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:29.409 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:29.409 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:29.409 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:29.409 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:29.409 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:29.409 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:29.409 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:29.410 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:29.410 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:29.410 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:29.410 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:29.410 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:29.410 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:29.410 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:29.410 10:32:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:29.410 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65233 00:09:29.410 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:29.410 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65233 00:09:29.410 10:32:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65233 ']' 00:09:29.410 10:32:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.410 10:32:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:29.410 10:32:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.410 10:32:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:29.410 10:32:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.410 [2024-11-20 10:32:32.633059] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:09:29.410 [2024-11-20 10:32:32.633329] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65233 ] 00:09:29.410 [2024-11-20 10:32:32.835505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.666 [2024-11-20 10:32:32.951980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.922 [2024-11-20 10:32:33.160068] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:29.922 [2024-11-20 10:32:33.160135] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:30.180 10:32:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:30.180 10:32:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:30.180 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:30.180 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:30.180 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:30.180 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:30.180 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:30.180 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:30.180 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:30.180 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:30.180 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:30.180 
10:32:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.180 10:32:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.180 malloc1 00:09:30.180 10:32:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.180 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:30.180 10:32:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.180 10:32:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.180 [2024-11-20 10:32:33.597539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:30.180 [2024-11-20 10:32:33.597651] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:30.180 [2024-11-20 10:32:33.597694] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:30.180 [2024-11-20 10:32:33.597724] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.180 [2024-11-20 10:32:33.599972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.180 [2024-11-20 10:32:33.600051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:30.180 pt1 00:09:30.180 10:32:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.180 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:30.180 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:30.180 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:30.180 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:30.180 10:32:33 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:30.180 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:30.180 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:30.180 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:30.180 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:30.180 10:32:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.180 10:32:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.180 malloc2 00:09:30.180 10:32:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.180 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:30.180 10:32:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.180 10:32:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.180 [2024-11-20 10:32:33.654155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:30.180 [2024-11-20 10:32:33.654218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:30.180 [2024-11-20 10:32:33.654242] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:30.180 [2024-11-20 10:32:33.654251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.438 [2024-11-20 10:32:33.656659] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.438 [2024-11-20 10:32:33.656701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:30.438 
pt2 00:09:30.438 10:32:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.438 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:30.438 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:30.438 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:30.438 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:30.438 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:30.438 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:30.438 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:30.438 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:30.438 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:30.438 10:32:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.438 10:32:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.438 malloc3 00:09:30.438 10:32:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.438 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:30.438 10:32:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.438 10:32:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.438 [2024-11-20 10:32:33.728260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:30.438 [2024-11-20 10:32:33.728397] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:30.438 [2024-11-20 10:32:33.728447] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:30.438 [2024-11-20 10:32:33.728490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.438 [2024-11-20 10:32:33.730933] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.438 [2024-11-20 10:32:33.731011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:30.438 pt3 00:09:30.438 10:32:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.438 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:30.438 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:30.438 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:30.438 10:32:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.438 10:32:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.438 [2024-11-20 10:32:33.740310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:30.439 [2024-11-20 10:32:33.742443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:30.439 [2024-11-20 10:32:33.742551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:30.439 [2024-11-20 10:32:33.742765] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:30.439 [2024-11-20 10:32:33.742819] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:30.439 [2024-11-20 10:32:33.743138] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:30.439 [2024-11-20 10:32:33.743400] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:30.439 [2024-11-20 10:32:33.743449] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:30.439 [2024-11-20 10:32:33.743686] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:30.439 10:32:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.439 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:30.439 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:30.439 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:30.439 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:30.439 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.439 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.439 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.439 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.439 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.439 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.439 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.439 10:32:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.439 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:30.439 10:32:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.439 10:32:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.439 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.439 "name": "raid_bdev1", 00:09:30.439 "uuid": "163da87c-50f8-4bd3-90ee-61d6448c75e7", 00:09:30.439 "strip_size_kb": 64, 00:09:30.439 "state": "online", 00:09:30.439 "raid_level": "raid0", 00:09:30.439 "superblock": true, 00:09:30.439 "num_base_bdevs": 3, 00:09:30.439 "num_base_bdevs_discovered": 3, 00:09:30.439 "num_base_bdevs_operational": 3, 00:09:30.439 "base_bdevs_list": [ 00:09:30.439 { 00:09:30.439 "name": "pt1", 00:09:30.439 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:30.439 "is_configured": true, 00:09:30.439 "data_offset": 2048, 00:09:30.439 "data_size": 63488 00:09:30.439 }, 00:09:30.439 { 00:09:30.439 "name": "pt2", 00:09:30.439 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:30.439 "is_configured": true, 00:09:30.439 "data_offset": 2048, 00:09:30.439 "data_size": 63488 00:09:30.439 }, 00:09:30.439 { 00:09:30.439 "name": "pt3", 00:09:30.439 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:30.439 "is_configured": true, 00:09:30.439 "data_offset": 2048, 00:09:30.439 "data_size": 63488 00:09:30.439 } 00:09:30.439 ] 00:09:30.439 }' 00:09:30.439 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.439 10:32:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.004 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:31.004 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:31.004 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:31.004 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:31.004 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:31.004 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:31.004 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:31.004 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.004 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.004 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:31.004 [2024-11-20 10:32:34.191897] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:31.004 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.004 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:31.004 "name": "raid_bdev1", 00:09:31.004 "aliases": [ 00:09:31.004 "163da87c-50f8-4bd3-90ee-61d6448c75e7" 00:09:31.004 ], 00:09:31.004 "product_name": "Raid Volume", 00:09:31.004 "block_size": 512, 00:09:31.004 "num_blocks": 190464, 00:09:31.004 "uuid": "163da87c-50f8-4bd3-90ee-61d6448c75e7", 00:09:31.004 "assigned_rate_limits": { 00:09:31.004 "rw_ios_per_sec": 0, 00:09:31.004 "rw_mbytes_per_sec": 0, 00:09:31.004 "r_mbytes_per_sec": 0, 00:09:31.004 "w_mbytes_per_sec": 0 00:09:31.004 }, 00:09:31.004 "claimed": false, 00:09:31.004 "zoned": false, 00:09:31.004 "supported_io_types": { 00:09:31.004 "read": true, 00:09:31.004 "write": true, 00:09:31.004 "unmap": true, 00:09:31.004 "flush": true, 00:09:31.004 "reset": true, 00:09:31.004 "nvme_admin": false, 00:09:31.004 "nvme_io": false, 00:09:31.004 "nvme_io_md": false, 00:09:31.004 "write_zeroes": true, 00:09:31.004 "zcopy": false, 00:09:31.004 "get_zone_info": false, 00:09:31.004 "zone_management": false, 00:09:31.004 "zone_append": false, 00:09:31.004 "compare": 
false, 00:09:31.004 "compare_and_write": false, 00:09:31.004 "abort": false, 00:09:31.004 "seek_hole": false, 00:09:31.004 "seek_data": false, 00:09:31.004 "copy": false, 00:09:31.004 "nvme_iov_md": false 00:09:31.004 }, 00:09:31.004 "memory_domains": [ 00:09:31.004 { 00:09:31.004 "dma_device_id": "system", 00:09:31.004 "dma_device_type": 1 00:09:31.004 }, 00:09:31.004 { 00:09:31.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.004 "dma_device_type": 2 00:09:31.004 }, 00:09:31.004 { 00:09:31.004 "dma_device_id": "system", 00:09:31.004 "dma_device_type": 1 00:09:31.004 }, 00:09:31.004 { 00:09:31.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.004 "dma_device_type": 2 00:09:31.004 }, 00:09:31.004 { 00:09:31.004 "dma_device_id": "system", 00:09:31.004 "dma_device_type": 1 00:09:31.004 }, 00:09:31.004 { 00:09:31.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.004 "dma_device_type": 2 00:09:31.004 } 00:09:31.004 ], 00:09:31.004 "driver_specific": { 00:09:31.004 "raid": { 00:09:31.004 "uuid": "163da87c-50f8-4bd3-90ee-61d6448c75e7", 00:09:31.004 "strip_size_kb": 64, 00:09:31.004 "state": "online", 00:09:31.004 "raid_level": "raid0", 00:09:31.004 "superblock": true, 00:09:31.004 "num_base_bdevs": 3, 00:09:31.004 "num_base_bdevs_discovered": 3, 00:09:31.004 "num_base_bdevs_operational": 3, 00:09:31.004 "base_bdevs_list": [ 00:09:31.004 { 00:09:31.004 "name": "pt1", 00:09:31.004 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:31.004 "is_configured": true, 00:09:31.004 "data_offset": 2048, 00:09:31.004 "data_size": 63488 00:09:31.004 }, 00:09:31.004 { 00:09:31.004 "name": "pt2", 00:09:31.004 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:31.004 "is_configured": true, 00:09:31.004 "data_offset": 2048, 00:09:31.004 "data_size": 63488 00:09:31.004 }, 00:09:31.004 { 00:09:31.004 "name": "pt3", 00:09:31.004 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:31.004 "is_configured": true, 00:09:31.004 "data_offset": 2048, 00:09:31.004 "data_size": 
63488 00:09:31.004 } 00:09:31.004 ] 00:09:31.004 } 00:09:31.004 } 00:09:31.004 }' 00:09:31.004 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:31.004 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:31.004 pt2 00:09:31.004 pt3' 00:09:31.004 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.004 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:31.004 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.005 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:31.005 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.005 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.005 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.005 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.005 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.005 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.005 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.005 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:31.005 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.005 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.005 
10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.005 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.005 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.005 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.005 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.005 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.005 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:31.005 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.005 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.005 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.005 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.005 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.005 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:31.005 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:31.005 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.005 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.005 [2024-11-20 10:32:34.467431] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=163da87c-50f8-4bd3-90ee-61d6448c75e7 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 163da87c-50f8-4bd3-90ee-61d6448c75e7 ']' 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.263 [2024-11-20 10:32:34.514992] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:31.263 [2024-11-20 10:32:34.515074] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:31.263 [2024-11-20 10:32:34.515169] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:31.263 [2024-11-20 10:32:34.515253] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:31.263 [2024-11-20 10:32:34.515264] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.263 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.263 [2024-11-20 10:32:34.646880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:31.263 [2024-11-20 10:32:34.649328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:31.263 [2024-11-20 10:32:34.649406] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:31.263 [2024-11-20 10:32:34.649464] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:31.264 [2024-11-20 10:32:34.649521] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:31.264 [2024-11-20 10:32:34.649543] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:31.264 [2024-11-20 10:32:34.649561] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:31.264 [2024-11-20 10:32:34.649573] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:31.264 request: 00:09:31.264 { 00:09:31.264 "name": "raid_bdev1", 00:09:31.264 "raid_level": "raid0", 00:09:31.264 "base_bdevs": [ 00:09:31.264 "malloc1", 00:09:31.264 "malloc2", 00:09:31.264 "malloc3" 00:09:31.264 ], 00:09:31.264 "strip_size_kb": 64, 00:09:31.264 "superblock": false, 00:09:31.264 "method": "bdev_raid_create", 00:09:31.264 "req_id": 1 00:09:31.264 } 00:09:31.264 Got JSON-RPC error response 00:09:31.264 response: 00:09:31.264 { 00:09:31.264 "code": -17, 00:09:31.264 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:31.264 } 00:09:31.264 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:31.264 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:31.264 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:31.264 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:31.264 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:31.264 10:32:34 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.264 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:31.264 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.264 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.264 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.264 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:31.264 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:31.264 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:31.264 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.264 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.264 [2024-11-20 10:32:34.710703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:31.264 [2024-11-20 10:32:34.710857] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.264 [2024-11-20 10:32:34.710921] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:31.264 [2024-11-20 10:32:34.710968] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.264 [2024-11-20 10:32:34.713668] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.264 [2024-11-20 10:32:34.713767] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:31.264 [2024-11-20 10:32:34.713930] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:31.264 [2024-11-20 10:32:34.714044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:09:31.264 pt1 00:09:31.264 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.264 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:31.264 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:31.264 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.264 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:31.264 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.264 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.264 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.264 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.264 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.264 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.264 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.264 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:31.264 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.264 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.522 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.522 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.522 "name": "raid_bdev1", 00:09:31.522 "uuid": "163da87c-50f8-4bd3-90ee-61d6448c75e7", 00:09:31.522 
"strip_size_kb": 64, 00:09:31.522 "state": "configuring", 00:09:31.522 "raid_level": "raid0", 00:09:31.522 "superblock": true, 00:09:31.522 "num_base_bdevs": 3, 00:09:31.522 "num_base_bdevs_discovered": 1, 00:09:31.522 "num_base_bdevs_operational": 3, 00:09:31.522 "base_bdevs_list": [ 00:09:31.522 { 00:09:31.522 "name": "pt1", 00:09:31.522 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:31.522 "is_configured": true, 00:09:31.522 "data_offset": 2048, 00:09:31.522 "data_size": 63488 00:09:31.522 }, 00:09:31.522 { 00:09:31.522 "name": null, 00:09:31.522 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:31.522 "is_configured": false, 00:09:31.522 "data_offset": 2048, 00:09:31.522 "data_size": 63488 00:09:31.522 }, 00:09:31.522 { 00:09:31.522 "name": null, 00:09:31.522 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:31.522 "is_configured": false, 00:09:31.522 "data_offset": 2048, 00:09:31.522 "data_size": 63488 00:09:31.522 } 00:09:31.522 ] 00:09:31.522 }' 00:09:31.522 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.522 10:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.780 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:31.780 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:31.780 10:32:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.780 10:32:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.780 [2024-11-20 10:32:35.173944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:31.780 [2024-11-20 10:32:35.174022] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.780 [2024-11-20 10:32:35.174046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:09:31.780 [2024-11-20 10:32:35.174057] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.780 [2024-11-20 10:32:35.174597] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.780 [2024-11-20 10:32:35.174632] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:31.780 [2024-11-20 10:32:35.174730] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:31.780 [2024-11-20 10:32:35.174756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:31.780 pt2 00:09:31.780 10:32:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.780 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:31.780 10:32:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.780 10:32:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.780 [2024-11-20 10:32:35.185923] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:31.780 10:32:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.780 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:31.780 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:31.780 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.780 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:31.780 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.780 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.780 10:32:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.780 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.780 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.780 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.780 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.780 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:31.780 10:32:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.780 10:32:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.780 10:32:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.780 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.780 "name": "raid_bdev1", 00:09:31.780 "uuid": "163da87c-50f8-4bd3-90ee-61d6448c75e7", 00:09:31.780 "strip_size_kb": 64, 00:09:31.780 "state": "configuring", 00:09:31.780 "raid_level": "raid0", 00:09:31.780 "superblock": true, 00:09:31.780 "num_base_bdevs": 3, 00:09:31.780 "num_base_bdevs_discovered": 1, 00:09:31.780 "num_base_bdevs_operational": 3, 00:09:31.780 "base_bdevs_list": [ 00:09:31.780 { 00:09:31.780 "name": "pt1", 00:09:31.780 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:31.780 "is_configured": true, 00:09:31.780 "data_offset": 2048, 00:09:31.780 "data_size": 63488 00:09:31.780 }, 00:09:31.780 { 00:09:31.780 "name": null, 00:09:31.780 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:31.780 "is_configured": false, 00:09:31.780 "data_offset": 0, 00:09:31.780 "data_size": 63488 00:09:31.780 }, 00:09:31.780 { 00:09:31.780 "name": null, 00:09:31.780 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:31.780 
"is_configured": false, 00:09:31.780 "data_offset": 2048, 00:09:31.780 "data_size": 63488 00:09:31.780 } 00:09:31.780 ] 00:09:31.780 }' 00:09:31.780 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.780 10:32:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.346 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:32.346 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:32.346 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:32.346 10:32:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.346 10:32:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.346 [2024-11-20 10:32:35.681044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:32.346 [2024-11-20 10:32:35.681123] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:32.346 [2024-11-20 10:32:35.681143] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:32.346 [2024-11-20 10:32:35.681154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:32.346 [2024-11-20 10:32:35.681653] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:32.346 [2024-11-20 10:32:35.681684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:32.346 [2024-11-20 10:32:35.681780] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:32.346 [2024-11-20 10:32:35.681809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:32.346 pt2 00:09:32.346 10:32:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
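Earlier in the trace, the duplicate `bdev_raid_create` call is wrapped in autotest_common.sh's `NOT` helper and is expected to fail with the JSON-RPC `File exists` (-17) error, since the malloc bdevs already carry a superblock from `raid_bdev1`. A hypothetical minimal reimplementation of that negative-test idiom (not SPDK's actual helper):

```shell
#!/usr/bin/env bash
# Sketch of the NOT negative-test wrapper: it succeeds only when the
# wrapped command fails, mirroring how the test expects the duplicate
# bdev_raid_create to be rejected.
NOT() {
  if "$@"; then
    return 1    # command unexpectedly succeeded -> negative test fails
  fi
  return 0      # command failed as expected -> negative test passes
}

duplicate_create() { return 1; }   # stand-in for the failing rpc_cmd call

if NOT duplicate_create; then
  result="duplicate create rejected"
fi
echo "$result"
```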
00:09:32.346 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:32.346 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:32.346 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:32.346 10:32:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.346 10:32:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.346 [2024-11-20 10:32:35.693037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:32.346 [2024-11-20 10:32:35.693095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:32.346 [2024-11-20 10:32:35.693111] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:32.346 [2024-11-20 10:32:35.693122] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:32.346 [2024-11-20 10:32:35.693536] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:32.346 [2024-11-20 10:32:35.693566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:32.346 [2024-11-20 10:32:35.693641] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:32.346 [2024-11-20 10:32:35.693664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:32.346 [2024-11-20 10:32:35.693783] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:32.346 [2024-11-20 10:32:35.693794] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:32.346 [2024-11-20 10:32:35.694035] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:32.346 [2024-11-20 10:32:35.694164] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:32.346 [2024-11-20 10:32:35.694179] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:32.346 [2024-11-20 10:32:35.694312] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:32.346 pt3 00:09:32.346 10:32:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.346 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:32.346 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:32.346 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:32.346 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:32.346 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:32.346 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:32.346 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.346 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.346 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.346 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.346 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.346 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.346 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.346 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:09:32.346 10:32:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.346 10:32:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.346 10:32:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.346 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.346 "name": "raid_bdev1", 00:09:32.346 "uuid": "163da87c-50f8-4bd3-90ee-61d6448c75e7", 00:09:32.346 "strip_size_kb": 64, 00:09:32.346 "state": "online", 00:09:32.346 "raid_level": "raid0", 00:09:32.346 "superblock": true, 00:09:32.347 "num_base_bdevs": 3, 00:09:32.347 "num_base_bdevs_discovered": 3, 00:09:32.347 "num_base_bdevs_operational": 3, 00:09:32.347 "base_bdevs_list": [ 00:09:32.347 { 00:09:32.347 "name": "pt1", 00:09:32.347 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:32.347 "is_configured": true, 00:09:32.347 "data_offset": 2048, 00:09:32.347 "data_size": 63488 00:09:32.347 }, 00:09:32.347 { 00:09:32.347 "name": "pt2", 00:09:32.347 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:32.347 "is_configured": true, 00:09:32.347 "data_offset": 2048, 00:09:32.347 "data_size": 63488 00:09:32.347 }, 00:09:32.347 { 00:09:32.347 "name": "pt3", 00:09:32.347 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:32.347 "is_configured": true, 00:09:32.347 "data_offset": 2048, 00:09:32.347 "data_size": 63488 00:09:32.347 } 00:09:32.347 ] 00:09:32.347 }' 00:09:32.347 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.347 10:32:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.917 10:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:32.917 10:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:32.917 10:32:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:32.917 10:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:32.917 10:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:32.917 10:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:32.917 10:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:32.917 10:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:32.917 10:32:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.917 10:32:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.917 [2024-11-20 10:32:36.164637] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:32.917 10:32:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.917 10:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:32.917 "name": "raid_bdev1", 00:09:32.917 "aliases": [ 00:09:32.917 "163da87c-50f8-4bd3-90ee-61d6448c75e7" 00:09:32.917 ], 00:09:32.917 "product_name": "Raid Volume", 00:09:32.917 "block_size": 512, 00:09:32.917 "num_blocks": 190464, 00:09:32.917 "uuid": "163da87c-50f8-4bd3-90ee-61d6448c75e7", 00:09:32.917 "assigned_rate_limits": { 00:09:32.917 "rw_ios_per_sec": 0, 00:09:32.917 "rw_mbytes_per_sec": 0, 00:09:32.917 "r_mbytes_per_sec": 0, 00:09:32.917 "w_mbytes_per_sec": 0 00:09:32.917 }, 00:09:32.917 "claimed": false, 00:09:32.917 "zoned": false, 00:09:32.917 "supported_io_types": { 00:09:32.917 "read": true, 00:09:32.917 "write": true, 00:09:32.917 "unmap": true, 00:09:32.917 "flush": true, 00:09:32.917 "reset": true, 00:09:32.917 "nvme_admin": false, 00:09:32.917 "nvme_io": false, 00:09:32.917 "nvme_io_md": false, 00:09:32.917 
"write_zeroes": true, 00:09:32.917 "zcopy": false, 00:09:32.917 "get_zone_info": false, 00:09:32.917 "zone_management": false, 00:09:32.917 "zone_append": false, 00:09:32.917 "compare": false, 00:09:32.917 "compare_and_write": false, 00:09:32.917 "abort": false, 00:09:32.917 "seek_hole": false, 00:09:32.917 "seek_data": false, 00:09:32.917 "copy": false, 00:09:32.917 "nvme_iov_md": false 00:09:32.917 }, 00:09:32.917 "memory_domains": [ 00:09:32.917 { 00:09:32.917 "dma_device_id": "system", 00:09:32.917 "dma_device_type": 1 00:09:32.917 }, 00:09:32.917 { 00:09:32.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.917 "dma_device_type": 2 00:09:32.917 }, 00:09:32.917 { 00:09:32.917 "dma_device_id": "system", 00:09:32.917 "dma_device_type": 1 00:09:32.917 }, 00:09:32.917 { 00:09:32.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.917 "dma_device_type": 2 00:09:32.917 }, 00:09:32.917 { 00:09:32.917 "dma_device_id": "system", 00:09:32.917 "dma_device_type": 1 00:09:32.917 }, 00:09:32.917 { 00:09:32.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.917 "dma_device_type": 2 00:09:32.918 } 00:09:32.918 ], 00:09:32.918 "driver_specific": { 00:09:32.918 "raid": { 00:09:32.918 "uuid": "163da87c-50f8-4bd3-90ee-61d6448c75e7", 00:09:32.918 "strip_size_kb": 64, 00:09:32.918 "state": "online", 00:09:32.918 "raid_level": "raid0", 00:09:32.918 "superblock": true, 00:09:32.918 "num_base_bdevs": 3, 00:09:32.918 "num_base_bdevs_discovered": 3, 00:09:32.918 "num_base_bdevs_operational": 3, 00:09:32.918 "base_bdevs_list": [ 00:09:32.918 { 00:09:32.918 "name": "pt1", 00:09:32.918 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:32.918 "is_configured": true, 00:09:32.918 "data_offset": 2048, 00:09:32.918 "data_size": 63488 00:09:32.918 }, 00:09:32.918 { 00:09:32.918 "name": "pt2", 00:09:32.918 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:32.918 "is_configured": true, 00:09:32.918 "data_offset": 2048, 00:09:32.918 "data_size": 63488 00:09:32.918 }, 00:09:32.918 
{ 00:09:32.918 "name": "pt3", 00:09:32.918 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:32.918 "is_configured": true, 00:09:32.918 "data_offset": 2048, 00:09:32.918 "data_size": 63488 00:09:32.918 } 00:09:32.918 ] 00:09:32.918 } 00:09:32.918 } 00:09:32.918 }' 00:09:32.918 10:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:32.918 10:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:32.918 pt2 00:09:32.918 pt3' 00:09:32.918 10:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.918 10:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:32.918 10:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:32.918 10:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.918 10:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:32.918 10:32:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.918 10:32:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.918 10:32:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.918 10:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.918 10:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.918 10:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:32.918 10:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:32.918 10:32:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:32.918 10:32:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:32.918 10:32:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:32.918 10:32:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:32.918 10:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:32.918 10:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:32.918 10:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:32.918 10:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:09:32.918 10:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:32.918 10:32:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:32.918 10:32:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:33.190 10:32:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:33.190 10:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:33.190 10:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:33.190 10:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:09:33.190 10:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:33.190 10:32:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:33.190 10:32:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:33.190 [2024-11-20 10:32:36.448169] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:33.190 10:32:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:33.190 10:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 163da87c-50f8-4bd3-90ee-61d6448c75e7 '!=' 163da87c-50f8-4bd3-90ee-61d6448c75e7 ']'
00:09:33.190 10:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0
00:09:33.190 10:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:33.190 10:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1
00:09:33.190 10:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65233
00:09:33.190 10:32:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65233 ']'
00:09:33.190 10:32:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65233
00:09:33.190 10:32:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname
00:09:33.190 10:32:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:33.190 10:32:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65233
00:09:33.190 10:32:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:33.190 10:32:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:33.190 10:32:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65233'
killing process with pid 65233
10:32:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65233
00:09:33.190 [2024-11-20 10:32:36.529683] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:33.190 10:32:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 65233
00:09:33.190 [2024-11-20 10:32:36.529868] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:33.190 [2024-11-20 10:32:36.529939] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:33.190 [2024-11-20 10:32:36.529957] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:09:33.448 [2024-11-20 10:32:36.854083] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:34.822 10:32:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:09:34.822
00:09:34.822 real 0m5.453s
00:09:34.822 user 0m7.863s
00:09:34.822 sys 0m0.951s
00:09:34.822 10:32:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:34.822 10:32:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:34.822 ************************************
00:09:34.822 END TEST raid_superblock_test
00:09:34.822 ************************************
00:09:34.822 10:32:38 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read
00:09:34.822 10:32:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:09:34.822 10:32:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:34.822 10:32:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:34.822 ************************************
00:09:34.822 START TEST raid_read_error_test
00:09:34.822 ************************************
00:09:34.822 10:32:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read
00:09:34.822 10:32:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0
00:09:34.822 10:32:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3
00:09:34.822 10:32:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read
00:09:34.822 10:32:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:09:34.822 10:32:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:34.822 10:32:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:09:34.822 10:32:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:34.822 10:32:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:34.822 10:32:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:09:34.822 10:32:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:34.822 10:32:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:34.822 10:32:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:09:34.822 10:32:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:34.822 10:32:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:34.822 10:32:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:09:34.822 10:32:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:09:34.822 10:32:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:09:34.822 10:32:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:09:34.822 10:32:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:09:34.822 10:32:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:09:34.822 10:32:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:09:34.822 10:32:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']'
00:09:34.822 10:32:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:09:34.822 10:32:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:09:34.822 10:32:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:09:34.822 10:32:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LwRr3ni2kb
00:09:34.822 10:32:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65492
00:09:34.822 10:32:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:09:34.822 10:32:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65492
00:09:34.822 10:32:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65492 ']'
00:09:34.822 10:32:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:34.822 10:32:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:34.822 10:32:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
10:32:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:34.822 10:32:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:34.822 [2024-11-20 10:32:38.134809] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization...
00:09:34.822 [2024-11-20 10:32:38.135015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65492 ]
00:09:35.080 [2024-11-20 10:32:38.310511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:35.080 [2024-11-20 10:32:38.430511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:35.337 [2024-11-20 10:32:38.642670] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:35.337 [2024-11-20 10:32:38.642805] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:35.594 10:32:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:35.594 10:32:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0
00:09:35.594 10:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:35.594 10:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:09:35.594 10:32:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:35.594 10:32:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:35.594 BaseBdev1_malloc
00:09:35.594 10:32:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:35.594 10:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:09:35.594 10:32:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:35.594 10:32:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:35.853 true
00:09:35.853 10:32:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:35.853 10:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:09:35.853 10:32:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:35.853 10:32:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:35.853 [2024-11-20 10:32:39.076851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:09:35.853 [2024-11-20 10:32:39.076917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:35.853 [2024-11-20 10:32:39.076950] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:09:35.853 [2024-11-20 10:32:39.076961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:35.853 [2024-11-20 10:32:39.079266] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:35.853 [2024-11-20 10:32:39.079369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
BaseBdev1
00:09:35.853 10:32:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:35.853 10:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:35.853 10:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:09:35.853 10:32:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:35.853 10:32:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:35.853 BaseBdev2_malloc
00:09:35.853 10:32:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:35.853 10:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:09:35.853 10:32:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:35.853 10:32:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:35.853 true
00:09:35.853 10:32:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:35.853 10:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:09:35.853 10:32:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:35.853 10:32:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:35.853 [2024-11-20 10:32:39.144590] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:09:35.853 [2024-11-20 10:32:39.144658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:35.853 [2024-11-20 10:32:39.144679] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:09:35.853 [2024-11-20 10:32:39.144690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:35.853 [2024-11-20 10:32:39.146904] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:35.853 [2024-11-20 10:32:39.146947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
BaseBdev2
00:09:35.853 10:32:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:35.853 10:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:35.853 10:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:09:35.853 10:32:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:35.853 10:32:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:35.853 BaseBdev3_malloc
00:09:35.853 10:32:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:35.853 10:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:09:35.853 10:32:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:35.853 10:32:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:35.853 true
00:09:35.853 10:32:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:35.853 10:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:09:35.853 10:32:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:35.853 10:32:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:35.853 [2024-11-20 10:32:39.226269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:09:35.853 [2024-11-20 10:32:39.226342] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:35.853 [2024-11-20 10:32:39.226380] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:09:35.853 [2024-11-20 10:32:39.226393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:35.853 [2024-11-20 10:32:39.228706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:35.853 [2024-11-20 10:32:39.228750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
BaseBdev3
00:09:35.853 10:32:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:35.853 10:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s
00:09:35.853 10:32:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:35.853 10:32:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:35.853 [2024-11-20 10:32:39.238373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:35.853 [2024-11-20 10:32:39.240347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:35.853 [2024-11-20 10:32:39.240453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:35.853 [2024-11-20 10:32:39.240681] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:09:35.853 [2024-11-20 10:32:39.240751] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:09:35.853 [2024-11-20 10:32:39.241078] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560
00:09:35.853 [2024-11-20 10:32:39.241250] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:09:35.853 [2024-11-20 10:32:39.241264] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:09:35.854 [2024-11-20 10:32:39.241467] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:35.854 10:32:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:35.854 10:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:09:35.854 10:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:35.854 10:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:35.854 10:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:35.854 10:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:35.854 10:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:35.854 10:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:35.854 10:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:35.854 10:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:35.854 10:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:35.854 10:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:35.854 10:32:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:35.854 10:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:35.854 10:32:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:35.854 10:32:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:35.854 10:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:35.854 "name": "raid_bdev1",
00:09:35.854 "uuid": "f0def3dc-2cbd-45bc-8824-9005318f5198",
00:09:35.854 "strip_size_kb": 64,
00:09:35.854 "state": "online",
00:09:35.854 "raid_level": "raid0",
00:09:35.854 "superblock": true,
00:09:35.854 "num_base_bdevs": 3,
00:09:35.854 "num_base_bdevs_discovered": 3,
00:09:35.854 "num_base_bdevs_operational": 3,
00:09:35.854 "base_bdevs_list": [
00:09:35.854 {
00:09:35.854 "name": "BaseBdev1",
00:09:35.854 "uuid": "2dfc9a45-65c0-528b-baa1-1d6d01bc2e85",
00:09:35.854 "is_configured": true,
00:09:35.854 "data_offset": 2048,
00:09:35.854 "data_size": 63488
00:09:35.854 },
00:09:35.854 {
00:09:35.854 "name": "BaseBdev2",
00:09:35.854 "uuid": "1c892ead-843b-5ca3-876b-88223f64fdeb",
00:09:35.854 "is_configured": true,
00:09:35.854 "data_offset": 2048,
00:09:35.854 "data_size": 63488
00:09:35.854 },
00:09:35.854 {
00:09:35.854 "name": "BaseBdev3",
00:09:35.854 "uuid": "382c3c33-f374-5b66-ab6d-3bfea94c2fe1",
00:09:35.854 "is_configured": true,
00:09:35.854 "data_offset": 2048,
00:09:35.854 "data_size": 63488
00:09:35.854 }
00:09:35.854 ]
00:09:35.854 }'
00:09:35.854 10:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:35.854 10:32:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:36.422 10:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:09:36.422 10:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:09:36.422 [2024-11-20 10:32:39.790708] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700
00:09:37.361 10:32:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:09:37.361 10:32:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.361 10:32:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:37.361 10:32:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.361 10:32:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:09:37.361 10:32:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]]
00:09:37.361 10:32:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3
00:09:37.361 10:32:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:09:37.361 10:32:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:37.361 10:32:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:37.361 10:32:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:37.361 10:32:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:37.361 10:32:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:37.361 10:32:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:37.361 10:32:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:37.361 10:32:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:37.361 10:32:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:37.361 10:32:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:37.361 10:32:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:37.361 10:32:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.361 10:32:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:37.361 10:32:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.361 10:32:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:37.361 "name": "raid_bdev1",
00:09:37.361 "uuid": "f0def3dc-2cbd-45bc-8824-9005318f5198",
00:09:37.361 "strip_size_kb": 64,
00:09:37.361 "state": "online",
00:09:37.361 "raid_level": "raid0",
00:09:37.361 "superblock": true,
00:09:37.361 "num_base_bdevs": 3,
00:09:37.361 "num_base_bdevs_discovered": 3,
00:09:37.361 "num_base_bdevs_operational": 3,
00:09:37.361 "base_bdevs_list": [
00:09:37.361 {
00:09:37.361 "name": "BaseBdev1",
00:09:37.361 "uuid": "2dfc9a45-65c0-528b-baa1-1d6d01bc2e85",
00:09:37.361 "is_configured": true,
00:09:37.361 "data_offset": 2048,
00:09:37.361 "data_size": 63488
00:09:37.361 },
00:09:37.361 {
00:09:37.361 "name": "BaseBdev2",
00:09:37.361 "uuid": "1c892ead-843b-5ca3-876b-88223f64fdeb",
00:09:37.361 "is_configured": true,
00:09:37.361 "data_offset": 2048,
00:09:37.361 "data_size": 63488
00:09:37.361 },
00:09:37.361 {
00:09:37.361 "name": "BaseBdev3",
00:09:37.361 "uuid": "382c3c33-f374-5b66-ab6d-3bfea94c2fe1",
00:09:37.361 "is_configured": true,
00:09:37.361 "data_offset": 2048,
00:09:37.361 "data_size": 63488
00:09:37.361 }
00:09:37.361 ]
00:09:37.361 }'
00:09:37.361 10:32:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:37.361 10:32:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:37.930 10:32:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:37.930 10:32:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.930 10:32:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:37.930 [2024-11-20 10:32:41.111479] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:37.930 [2024-11-20 10:32:41.111511] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:37.930 [2024-11-20 10:32:41.114362] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:37.930 [2024-11-20 10:32:41.114407] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:37.930 [2024-11-20 10:32:41.114445] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:37.930 [2024-11-20 10:32:41.114454] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:09:37.930 {
00:09:37.930 "results": [
00:09:37.930 {
00:09:37.930 "job": "raid_bdev1",
00:09:37.930 "core_mask": "0x1",
00:09:37.930 "workload": "randrw",
00:09:37.930 "percentage": 50,
00:09:37.930 "status": "finished",
00:09:37.930 "queue_depth": 1,
00:09:37.930 "io_size": 131072,
00:09:37.930 "runtime": 1.321221,
00:09:37.930 "iops": 14732.584480567597,
00:09:37.930 "mibps": 1841.5730600709496,
00:09:37.930 "io_failed": 1,
00:09:37.930 "io_timeout": 0,
00:09:37.930 "avg_latency_us": 94.37039343484126,
00:09:37.930 "min_latency_us": 27.053275109170304,
00:09:37.930 "max_latency_us": 1731.4096069868995
00:09:37.930 }
00:09:37.930 ],
00:09:37.930 "core_count": 1
00:09:37.930 }
00:09:37.930 10:32:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.930 10:32:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65492
00:09:37.930 10:32:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65492 ']'
00:09:37.930 10:32:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65492
00:09:37.930 10:32:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname
00:09:37.930 10:32:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:37.930 10:32:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65492
killing process with pid 65492
10:32:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:37.930 10:32:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:37.930 10:32:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65492'
00:09:37.930 10:32:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65492
00:09:37.930 [2024-11-20 10:32:41.158385] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:37.930 10:32:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65492
00:09:37.930 [2024-11-20 10:32:41.396437] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:39.322 10:32:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LwRr3ni2kb
00:09:39.322 10:32:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:09:39.322 10:32:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:09:39.322 10:32:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76
00:09:39.322 10:32:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0
************************************
00:09:39.322 END TEST raid_read_error_test
************************************
00:09:39.322 10:32:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:39.322 10:32:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:09:39.322 10:32:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]]
00:09:39.322
00:09:39.322 real 0m4.575s
00:09:39.322 user 0m5.463s
00:09:39.322 sys 0m0.550s
00:09:39.322 10:32:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:39.322 10:32:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:39.322 10:32:42 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write
00:09:39.322 10:32:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:09:39.322 10:32:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:39.322 10:32:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:39.322 ************************************
00:09:39.322 START TEST raid_write_error_test
00:09:39.322 ************************************
00:09:39.322 10:32:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write
00:09:39.322 10:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0
00:09:39.322 10:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3
00:09:39.322 10:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:09:39.322 10:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:09:39.322 10:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:39.322 10:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:09:39.322 10:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:39.322 10:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:39.322 10:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:09:39.322 10:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:39.322 10:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:39.322 10:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:09:39.322 10:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:39.322 10:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:39.322 10:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:09:39.322 10:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:09:39.322 10:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:09:39.322 10:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:09:39.322 10:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:09:39.322 10:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:09:39.322 10:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:09:39.322 10:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']'
00:09:39.322 10:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:09:39.322 10:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:09:39.322 10:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:09:39.322 10:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wY9Iq4QiNI
00:09:39.322 10:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65637
00:09:39.322 10:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65637
00:09:39.322 10:32:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65637 ']'
00:09:39.322 10:32:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:39.322 10:32:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
10:32:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:39.322 10:32:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:39.322 10:32:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.322 10:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:39.322 [2024-11-20 10:32:42.755166] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:09:39.322 [2024-11-20 10:32:42.755290] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65637 ] 00:09:39.582 [2024-11-20 10:32:42.930510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.582 [2024-11-20 10:32:43.054247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.841 [2024-11-20 10:32:43.255948] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.841 [2024-11-20 10:32:43.256030] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:40.411 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:40.411 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:40.411 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:40.411 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:40.411 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.411 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.411 BaseBdev1_malloc 00:09:40.411 10:32:43 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.411 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:40.411 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.411 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.411 true 00:09:40.411 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.411 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:40.411 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.411 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.411 [2024-11-20 10:32:43.686005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:40.411 [2024-11-20 10:32:43.686145] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.411 [2024-11-20 10:32:43.686213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:40.411 [2024-11-20 10:32:43.686262] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.411 [2024-11-20 10:32:43.688833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.411 [2024-11-20 10:32:43.688941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:40.411 BaseBdev1 00:09:40.411 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.411 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:40.411 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:09:40.411 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.411 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.411 BaseBdev2_malloc 00:09:40.411 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.411 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:40.412 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.412 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.412 true 00:09:40.412 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.412 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:40.412 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.412 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.412 [2024-11-20 10:32:43.757809] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:40.412 [2024-11-20 10:32:43.757872] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.412 [2024-11-20 10:32:43.757893] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:40.412 [2024-11-20 10:32:43.757907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.412 [2024-11-20 10:32:43.760337] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.412 [2024-11-20 10:32:43.760389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:40.412 BaseBdev2 00:09:40.412 10:32:43 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.412 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:40.412 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:40.412 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.412 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.412 BaseBdev3_malloc 00:09:40.412 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.412 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:40.412 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.412 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.412 true 00:09:40.412 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.412 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:40.412 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.412 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.412 [2024-11-20 10:32:43.840751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:40.412 [2024-11-20 10:32:43.840811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.412 [2024-11-20 10:32:43.840832] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:40.412 [2024-11-20 10:32:43.840845] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.412 [2024-11-20 10:32:43.843276] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.412 [2024-11-20 10:32:43.843369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:40.412 BaseBdev3 00:09:40.412 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.412 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:40.412 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.412 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.412 [2024-11-20 10:32:43.852808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:40.412 [2024-11-20 10:32:43.854891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:40.412 [2024-11-20 10:32:43.854984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:40.412 [2024-11-20 10:32:43.855211] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:40.412 [2024-11-20 10:32:43.855227] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:40.412 [2024-11-20 10:32:43.855550] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:40.412 [2024-11-20 10:32:43.855740] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:40.412 [2024-11-20 10:32:43.855779] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:40.412 [2024-11-20 10:32:43.855944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.412 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.412 
10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:40.412 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:40.412 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:40.412 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:40.412 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.412 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.412 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.412 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.412 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.412 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.412 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.412 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:40.412 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.412 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.412 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.671 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.671 "name": "raid_bdev1", 00:09:40.671 "uuid": "07cd753b-92bc-428f-915d-45faa34c4177", 00:09:40.671 "strip_size_kb": 64, 00:09:40.671 "state": "online", 00:09:40.671 "raid_level": "raid0", 00:09:40.671 "superblock": true, 
00:09:40.671 "num_base_bdevs": 3, 00:09:40.671 "num_base_bdevs_discovered": 3, 00:09:40.671 "num_base_bdevs_operational": 3, 00:09:40.671 "base_bdevs_list": [ 00:09:40.671 { 00:09:40.671 "name": "BaseBdev1", 00:09:40.671 "uuid": "20f192ba-7e7c-551a-820c-8c46f50c783d", 00:09:40.671 "is_configured": true, 00:09:40.671 "data_offset": 2048, 00:09:40.671 "data_size": 63488 00:09:40.671 }, 00:09:40.671 { 00:09:40.671 "name": "BaseBdev2", 00:09:40.671 "uuid": "854deb91-1673-59ff-9cf8-0c82cd0a3c31", 00:09:40.671 "is_configured": true, 00:09:40.671 "data_offset": 2048, 00:09:40.671 "data_size": 63488 00:09:40.671 }, 00:09:40.671 { 00:09:40.671 "name": "BaseBdev3", 00:09:40.671 "uuid": "bdbe1227-88c0-547e-84ae-18b4485f9be0", 00:09:40.671 "is_configured": true, 00:09:40.671 "data_offset": 2048, 00:09:40.671 "data_size": 63488 00:09:40.671 } 00:09:40.671 ] 00:09:40.671 }' 00:09:40.671 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.671 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.929 10:32:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:40.929 10:32:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:41.187 [2024-11-20 10:32:44.445450] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:42.122 10:32:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:42.122 10:32:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.122 10:32:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.122 10:32:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.122 10:32:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local 
expected_num_base_bdevs 00:09:42.122 10:32:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:42.122 10:32:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:42.122 10:32:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:42.122 10:32:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:42.122 10:32:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:42.122 10:32:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:42.123 10:32:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.123 10:32:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.123 10:32:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.123 10:32:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.123 10:32:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.123 10:32:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.123 10:32:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.123 10:32:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.123 10:32:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.123 10:32:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:42.123 10:32:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.123 10:32:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:09:42.123 "name": "raid_bdev1", 00:09:42.123 "uuid": "07cd753b-92bc-428f-915d-45faa34c4177", 00:09:42.123 "strip_size_kb": 64, 00:09:42.123 "state": "online", 00:09:42.123 "raid_level": "raid0", 00:09:42.123 "superblock": true, 00:09:42.123 "num_base_bdevs": 3, 00:09:42.123 "num_base_bdevs_discovered": 3, 00:09:42.123 "num_base_bdevs_operational": 3, 00:09:42.123 "base_bdevs_list": [ 00:09:42.123 { 00:09:42.123 "name": "BaseBdev1", 00:09:42.123 "uuid": "20f192ba-7e7c-551a-820c-8c46f50c783d", 00:09:42.123 "is_configured": true, 00:09:42.123 "data_offset": 2048, 00:09:42.123 "data_size": 63488 00:09:42.123 }, 00:09:42.123 { 00:09:42.123 "name": "BaseBdev2", 00:09:42.123 "uuid": "854deb91-1673-59ff-9cf8-0c82cd0a3c31", 00:09:42.123 "is_configured": true, 00:09:42.123 "data_offset": 2048, 00:09:42.123 "data_size": 63488 00:09:42.123 }, 00:09:42.123 { 00:09:42.123 "name": "BaseBdev3", 00:09:42.123 "uuid": "bdbe1227-88c0-547e-84ae-18b4485f9be0", 00:09:42.123 "is_configured": true, 00:09:42.123 "data_offset": 2048, 00:09:42.123 "data_size": 63488 00:09:42.123 } 00:09:42.123 ] 00:09:42.123 }' 00:09:42.123 10:32:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.123 10:32:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.690 10:32:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:42.690 10:32:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.690 10:32:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.690 [2024-11-20 10:32:45.871295] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:42.690 [2024-11-20 10:32:45.871330] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:42.690 [2024-11-20 10:32:45.874558] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:09:42.690 [2024-11-20 10:32:45.874611] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:42.690 [2024-11-20 10:32:45.874654] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:42.690 [2024-11-20 10:32:45.874665] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:42.690 { 00:09:42.690 "results": [ 00:09:42.690 { 00:09:42.690 "job": "raid_bdev1", 00:09:42.690 "core_mask": "0x1", 00:09:42.690 "workload": "randrw", 00:09:42.690 "percentage": 50, 00:09:42.690 "status": "finished", 00:09:42.690 "queue_depth": 1, 00:09:42.690 "io_size": 131072, 00:09:42.690 "runtime": 1.426479, 00:09:42.690 "iops": 13433.07542557584, 00:09:42.690 "mibps": 1679.13442819698, 00:09:42.690 "io_failed": 1, 00:09:42.690 "io_timeout": 0, 00:09:42.690 "avg_latency_us": 103.26374894122522, 00:09:42.690 "min_latency_us": 20.56943231441048, 00:09:42.690 "max_latency_us": 1738.564192139738 00:09:42.690 } 00:09:42.690 ], 00:09:42.690 "core_count": 1 00:09:42.690 } 00:09:42.690 10:32:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.690 10:32:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65637 00:09:42.690 10:32:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65637 ']' 00:09:42.690 10:32:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65637 00:09:42.690 10:32:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:42.690 10:32:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:42.691 10:32:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65637 00:09:42.691 killing process with pid 65637 00:09:42.691 10:32:45 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:42.691 10:32:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:42.691 10:32:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65637' 00:09:42.691 10:32:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65637 00:09:42.691 [2024-11-20 10:32:45.914565] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:42.691 10:32:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65637 00:09:42.949 [2024-11-20 10:32:46.198899] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:44.327 10:32:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:44.327 10:32:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wY9Iq4QiNI 00:09:44.327 10:32:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:44.327 10:32:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:09:44.327 10:32:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:44.327 ************************************ 00:09:44.327 END TEST raid_write_error_test 00:09:44.327 ************************************ 00:09:44.327 10:32:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:44.327 10:32:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:44.327 10:32:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:09:44.327 00:09:44.327 real 0m4.957s 00:09:44.327 user 0m5.944s 00:09:44.327 sys 0m0.586s 00:09:44.327 10:32:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.327 10:32:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.327 
10:32:47 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:44.327 10:32:47 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:09:44.327 10:32:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:44.327 10:32:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.327 10:32:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:44.327 ************************************ 00:09:44.327 START TEST raid_state_function_test 00:09:44.328 ************************************ 00:09:44.328 10:32:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:09:44.328 10:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:44.328 10:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:44.328 10:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:44.328 10:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:44.328 10:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:44.328 10:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:44.328 10:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:44.328 10:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:44.328 10:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:44.328 10:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:44.328 10:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:44.328 10:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i 
<= num_base_bdevs )) 00:09:44.328 10:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:44.328 10:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:44.328 10:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:44.328 10:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:44.328 10:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:44.328 10:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:44.328 10:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:44.328 10:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:44.328 10:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:44.328 10:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:44.328 10:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:44.328 10:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:44.328 10:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:44.328 10:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:44.328 10:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65781 00:09:44.328 10:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:44.328 10:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65781' 00:09:44.328 Process raid pid: 65781 
00:09:44.328 10:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65781 00:09:44.328 10:32:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65781 ']' 00:09:44.328 10:32:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.328 10:32:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:44.328 10:32:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.328 10:32:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:44.328 10:32:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.328 [2024-11-20 10:32:47.790100] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:09:44.328 [2024-11-20 10:32:47.790226] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:44.587 [2024-11-20 10:32:47.953320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.846 [2024-11-20 10:32:48.089639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.105 [2024-11-20 10:32:48.328330] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:45.105 [2024-11-20 10:32:48.328397] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:45.392 10:32:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:45.392 10:32:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:45.392 10:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:45.392 10:32:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.392 10:32:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.392 [2024-11-20 10:32:48.713990] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:45.392 [2024-11-20 10:32:48.714054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:45.392 [2024-11-20 10:32:48.714066] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:45.392 [2024-11-20 10:32:48.714077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:45.392 [2024-11-20 10:32:48.714084] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:09:45.392 [2024-11-20 10:32:48.714093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:45.392 10:32:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.392 10:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:45.392 10:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.392 10:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.392 10:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:45.392 10:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.392 10:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.392 10:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.392 10:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.392 10:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.392 10:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.392 10:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.392 10:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.392 10:32:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.392 10:32:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.392 10:32:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.392 10:32:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.392 "name": "Existed_Raid", 00:09:45.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.392 "strip_size_kb": 64, 00:09:45.392 "state": "configuring", 00:09:45.392 "raid_level": "concat", 00:09:45.392 "superblock": false, 00:09:45.392 "num_base_bdevs": 3, 00:09:45.392 "num_base_bdevs_discovered": 0, 00:09:45.392 "num_base_bdevs_operational": 3, 00:09:45.392 "base_bdevs_list": [ 00:09:45.392 { 00:09:45.392 "name": "BaseBdev1", 00:09:45.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.392 "is_configured": false, 00:09:45.392 "data_offset": 0, 00:09:45.392 "data_size": 0 00:09:45.392 }, 00:09:45.392 { 00:09:45.392 "name": "BaseBdev2", 00:09:45.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.392 "is_configured": false, 00:09:45.392 "data_offset": 0, 00:09:45.392 "data_size": 0 00:09:45.392 }, 00:09:45.392 { 00:09:45.392 "name": "BaseBdev3", 00:09:45.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.392 "is_configured": false, 00:09:45.392 "data_offset": 0, 00:09:45.392 "data_size": 0 00:09:45.392 } 00:09:45.392 ] 00:09:45.392 }' 00:09:45.392 10:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.392 10:32:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.967 10:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:45.967 10:32:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.967 10:32:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.967 [2024-11-20 10:32:49.185136] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:45.967 [2024-11-20 10:32:49.185237] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
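`verify_raid_bdev_state` pipes `bdev_raid_get_bdevs all` through `jq` and compares the captured fields against the expected values. The same check can be sketched in Python against the JSON recorded in this log (field names taken verbatim from the output above, with per-member uuid fields trimmed; the helper itself is illustrative, not the test suite's actual code):

```python
import json

# raid_bdev_info as captured by `rpc_cmd bdev_raid_get_bdevs all` above,
# reduced to the fields the state check actually compares.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "concat",
  "superblock": false,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 0,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": false, "data_offset": 0, "data_size": 0},
    {"name": "BaseBdev2", "is_configured": false, "data_offset": 0, "data_size": 0},
    {"name": "BaseBdev3", "is_configured": false, "data_offset": 0, "data_size": 0}
  ]
}
""")

def verify_raid_bdev_state(info, state, level, strip_size, num_operational):
    """Field-by-field comparison, mirroring the shell helper's checks."""
    assert info["state"] == state
    assert info["raid_level"] == level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    # The discovered count must agree with the per-member flags.
    configured = sum(b["is_configured"] for b in info["base_bdevs_list"])
    assert configured == info["num_base_bdevs_discovered"]

verify_raid_bdev_state(raid_bdev_info, "configuring", "concat", 64, 3)
```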
00:09:45.967 10:32:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.967 10:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:45.967 10:32:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.967 10:32:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.967 [2024-11-20 10:32:49.193126] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:45.967 [2024-11-20 10:32:49.193226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:45.967 [2024-11-20 10:32:49.193258] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:45.967 [2024-11-20 10:32:49.193284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:45.967 [2024-11-20 10:32:49.193306] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:45.967 [2024-11-20 10:32:49.193330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:45.967 10:32:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.967 10:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:45.967 10:32:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.967 10:32:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.967 [2024-11-20 10:32:49.236950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:45.967 BaseBdev1 00:09:45.967 10:32:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:09:45.967 10:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:45.967 10:32:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:45.967 10:32:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:45.967 10:32:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:45.967 10:32:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:45.967 10:32:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:45.967 10:32:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:45.967 10:32:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.967 10:32:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.967 10:32:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.967 10:32:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:45.967 10:32:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.967 10:32:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.967 [ 00:09:45.967 { 00:09:45.967 "name": "BaseBdev1", 00:09:45.967 "aliases": [ 00:09:45.967 "87476caa-3828-4777-ab8d-a28f4112c14a" 00:09:45.967 ], 00:09:45.967 "product_name": "Malloc disk", 00:09:45.967 "block_size": 512, 00:09:45.967 "num_blocks": 65536, 00:09:45.967 "uuid": "87476caa-3828-4777-ab8d-a28f4112c14a", 00:09:45.967 "assigned_rate_limits": { 00:09:45.967 "rw_ios_per_sec": 0, 00:09:45.967 "rw_mbytes_per_sec": 0, 00:09:45.967 "r_mbytes_per_sec": 0, 00:09:45.967 "w_mbytes_per_sec": 0 00:09:45.967 }, 
00:09:45.967 "claimed": true, 00:09:45.967 "claim_type": "exclusive_write", 00:09:45.967 "zoned": false, 00:09:45.967 "supported_io_types": { 00:09:45.967 "read": true, 00:09:45.967 "write": true, 00:09:45.967 "unmap": true, 00:09:45.967 "flush": true, 00:09:45.967 "reset": true, 00:09:45.967 "nvme_admin": false, 00:09:45.967 "nvme_io": false, 00:09:45.967 "nvme_io_md": false, 00:09:45.967 "write_zeroes": true, 00:09:45.967 "zcopy": true, 00:09:45.967 "get_zone_info": false, 00:09:45.967 "zone_management": false, 00:09:45.967 "zone_append": false, 00:09:45.967 "compare": false, 00:09:45.967 "compare_and_write": false, 00:09:45.967 "abort": true, 00:09:45.967 "seek_hole": false, 00:09:45.967 "seek_data": false, 00:09:45.967 "copy": true, 00:09:45.967 "nvme_iov_md": false 00:09:45.967 }, 00:09:45.967 "memory_domains": [ 00:09:45.967 { 00:09:45.967 "dma_device_id": "system", 00:09:45.967 "dma_device_type": 1 00:09:45.967 }, 00:09:45.967 { 00:09:45.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.967 "dma_device_type": 2 00:09:45.967 } 00:09:45.967 ], 00:09:45.967 "driver_specific": {} 00:09:45.967 } 00:09:45.967 ] 00:09:45.967 10:32:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.967 10:32:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:45.967 10:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:45.967 10:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.967 10:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.967 10:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:45.967 10:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.967 10:32:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.967 10:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.967 10:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.967 10:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.968 10:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.968 10:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.968 10:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.968 10:32:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.968 10:32:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.968 10:32:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.968 10:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.968 "name": "Existed_Raid", 00:09:45.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.968 "strip_size_kb": 64, 00:09:45.968 "state": "configuring", 00:09:45.968 "raid_level": "concat", 00:09:45.968 "superblock": false, 00:09:45.968 "num_base_bdevs": 3, 00:09:45.968 "num_base_bdevs_discovered": 1, 00:09:45.968 "num_base_bdevs_operational": 3, 00:09:45.968 "base_bdevs_list": [ 00:09:45.968 { 00:09:45.968 "name": "BaseBdev1", 00:09:45.968 "uuid": "87476caa-3828-4777-ab8d-a28f4112c14a", 00:09:45.968 "is_configured": true, 00:09:45.968 "data_offset": 0, 00:09:45.968 "data_size": 65536 00:09:45.968 }, 00:09:45.968 { 00:09:45.968 "name": "BaseBdev2", 00:09:45.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.968 "is_configured": false, 00:09:45.968 
"data_offset": 0, 00:09:45.968 "data_size": 0 00:09:45.968 }, 00:09:45.968 { 00:09:45.968 "name": "BaseBdev3", 00:09:45.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.968 "is_configured": false, 00:09:45.968 "data_offset": 0, 00:09:45.968 "data_size": 0 00:09:45.968 } 00:09:45.968 ] 00:09:45.968 }' 00:09:45.968 10:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.968 10:32:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.537 10:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:46.537 10:32:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.537 10:32:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.537 [2024-11-20 10:32:49.756124] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:46.537 [2024-11-20 10:32:49.756185] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:46.537 10:32:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.537 10:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:46.537 10:32:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.537 10:32:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.537 [2024-11-20 10:32:49.768157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:46.537 [2024-11-20 10:32:49.770170] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:46.537 [2024-11-20 10:32:49.770234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:09:46.537 [2024-11-20 10:32:49.770245] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:46.537 [2024-11-20 10:32:49.770256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:46.537 10:32:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.537 10:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:46.537 10:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:46.537 10:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:46.537 10:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.537 10:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.537 10:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:46.537 10:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.537 10:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.537 10:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.537 10:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.537 10:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.537 10:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.537 10:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.537 10:32:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:46.537 10:32:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.537 10:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.537 10:32:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.537 10:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.537 "name": "Existed_Raid", 00:09:46.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.537 "strip_size_kb": 64, 00:09:46.537 "state": "configuring", 00:09:46.537 "raid_level": "concat", 00:09:46.537 "superblock": false, 00:09:46.537 "num_base_bdevs": 3, 00:09:46.537 "num_base_bdevs_discovered": 1, 00:09:46.537 "num_base_bdevs_operational": 3, 00:09:46.537 "base_bdevs_list": [ 00:09:46.537 { 00:09:46.537 "name": "BaseBdev1", 00:09:46.537 "uuid": "87476caa-3828-4777-ab8d-a28f4112c14a", 00:09:46.537 "is_configured": true, 00:09:46.537 "data_offset": 0, 00:09:46.537 "data_size": 65536 00:09:46.537 }, 00:09:46.537 { 00:09:46.537 "name": "BaseBdev2", 00:09:46.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.537 "is_configured": false, 00:09:46.537 "data_offset": 0, 00:09:46.537 "data_size": 0 00:09:46.537 }, 00:09:46.537 { 00:09:46.537 "name": "BaseBdev3", 00:09:46.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.537 "is_configured": false, 00:09:46.537 "data_offset": 0, 00:09:46.537 "data_size": 0 00:09:46.537 } 00:09:46.537 ] 00:09:46.537 }' 00:09:46.537 10:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.537 10:32:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.863 10:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:46.863 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:46.863 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.863 [2024-11-20 10:32:50.244164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:46.863 BaseBdev2 00:09:46.863 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.863 10:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:46.863 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:46.863 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:46.863 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:46.863 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:46.863 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:46.863 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:46.863 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.863 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.863 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.863 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:46.863 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.863 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.863 [ 00:09:46.863 { 00:09:46.863 "name": "BaseBdev2", 00:09:46.863 "aliases": [ 00:09:46.863 "aa383165-72d9-4c1b-9053-be93c153f541" 00:09:46.863 ], 00:09:46.863 
"product_name": "Malloc disk", 00:09:46.863 "block_size": 512, 00:09:46.863 "num_blocks": 65536, 00:09:46.863 "uuid": "aa383165-72d9-4c1b-9053-be93c153f541", 00:09:46.863 "assigned_rate_limits": { 00:09:46.863 "rw_ios_per_sec": 0, 00:09:46.863 "rw_mbytes_per_sec": 0, 00:09:46.863 "r_mbytes_per_sec": 0, 00:09:46.863 "w_mbytes_per_sec": 0 00:09:46.863 }, 00:09:46.863 "claimed": true, 00:09:46.863 "claim_type": "exclusive_write", 00:09:46.863 "zoned": false, 00:09:46.863 "supported_io_types": { 00:09:46.863 "read": true, 00:09:46.863 "write": true, 00:09:46.863 "unmap": true, 00:09:46.863 "flush": true, 00:09:46.863 "reset": true, 00:09:46.863 "nvme_admin": false, 00:09:46.863 "nvme_io": false, 00:09:46.863 "nvme_io_md": false, 00:09:46.863 "write_zeroes": true, 00:09:46.863 "zcopy": true, 00:09:46.863 "get_zone_info": false, 00:09:46.863 "zone_management": false, 00:09:46.863 "zone_append": false, 00:09:46.863 "compare": false, 00:09:46.863 "compare_and_write": false, 00:09:46.863 "abort": true, 00:09:46.863 "seek_hole": false, 00:09:46.863 "seek_data": false, 00:09:46.863 "copy": true, 00:09:46.863 "nvme_iov_md": false 00:09:46.863 }, 00:09:46.863 "memory_domains": [ 00:09:46.863 { 00:09:46.863 "dma_device_id": "system", 00:09:46.863 "dma_device_type": 1 00:09:46.863 }, 00:09:46.863 { 00:09:46.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.863 "dma_device_type": 2 00:09:46.863 } 00:09:46.863 ], 00:09:46.863 "driver_specific": {} 00:09:46.863 } 00:09:46.863 ] 00:09:46.863 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.863 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:46.863 10:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:46.863 10:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:46.863 10:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:46.863 10:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.863 10:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.863 10:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:46.863 10:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.863 10:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.863 10:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.863 10:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.863 10:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.863 10:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.863 10:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.863 10:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.863 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.863 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.863 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.863 10:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.863 "name": "Existed_Raid", 00:09:46.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.863 "strip_size_kb": 64, 00:09:46.863 "state": "configuring", 00:09:46.863 "raid_level": "concat", 00:09:46.863 "superblock": false, 
00:09:46.863 "num_base_bdevs": 3, 00:09:46.863 "num_base_bdevs_discovered": 2, 00:09:46.863 "num_base_bdevs_operational": 3, 00:09:46.863 "base_bdevs_list": [ 00:09:46.863 { 00:09:46.863 "name": "BaseBdev1", 00:09:46.863 "uuid": "87476caa-3828-4777-ab8d-a28f4112c14a", 00:09:46.863 "is_configured": true, 00:09:46.863 "data_offset": 0, 00:09:46.863 "data_size": 65536 00:09:46.863 }, 00:09:46.863 { 00:09:46.863 "name": "BaseBdev2", 00:09:46.863 "uuid": "aa383165-72d9-4c1b-9053-be93c153f541", 00:09:46.863 "is_configured": true, 00:09:46.863 "data_offset": 0, 00:09:46.863 "data_size": 65536 00:09:46.863 }, 00:09:46.863 { 00:09:46.863 "name": "BaseBdev3", 00:09:46.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.863 "is_configured": false, 00:09:46.863 "data_offset": 0, 00:09:46.863 "data_size": 0 00:09:46.863 } 00:09:46.863 ] 00:09:46.863 }' 00:09:46.863 10:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.863 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.435 10:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:47.435 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.435 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.435 [2024-11-20 10:32:50.770855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:47.435 [2024-11-20 10:32:50.770911] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:47.435 [2024-11-20 10:32:50.770926] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:47.435 [2024-11-20 10:32:50.771236] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:47.435 [2024-11-20 10:32:50.771455] bdev_raid.c:1764:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x617000007e80 00:09:47.435 [2024-11-20 10:32:50.771469] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:47.435 [2024-11-20 10:32:50.771795] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.435 BaseBdev3 00:09:47.435 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.435 10:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:47.435 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:47.435 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:47.435 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:47.435 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:47.435 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:47.435 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:47.435 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.435 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.435 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.435 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:47.435 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.435 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.435 [ 00:09:47.435 { 00:09:47.435 "name": "BaseBdev3", 00:09:47.435 "aliases": [ 
00:09:47.435 "747b0784-a75e-4707-8ef4-a07b4016866e" 00:09:47.435 ], 00:09:47.435 "product_name": "Malloc disk", 00:09:47.435 "block_size": 512, 00:09:47.435 "num_blocks": 65536, 00:09:47.435 "uuid": "747b0784-a75e-4707-8ef4-a07b4016866e", 00:09:47.435 "assigned_rate_limits": { 00:09:47.435 "rw_ios_per_sec": 0, 00:09:47.435 "rw_mbytes_per_sec": 0, 00:09:47.435 "r_mbytes_per_sec": 0, 00:09:47.435 "w_mbytes_per_sec": 0 00:09:47.435 }, 00:09:47.435 "claimed": true, 00:09:47.435 "claim_type": "exclusive_write", 00:09:47.435 "zoned": false, 00:09:47.435 "supported_io_types": { 00:09:47.435 "read": true, 00:09:47.435 "write": true, 00:09:47.435 "unmap": true, 00:09:47.435 "flush": true, 00:09:47.435 "reset": true, 00:09:47.435 "nvme_admin": false, 00:09:47.435 "nvme_io": false, 00:09:47.435 "nvme_io_md": false, 00:09:47.435 "write_zeroes": true, 00:09:47.435 "zcopy": true, 00:09:47.435 "get_zone_info": false, 00:09:47.435 "zone_management": false, 00:09:47.435 "zone_append": false, 00:09:47.435 "compare": false, 00:09:47.435 "compare_and_write": false, 00:09:47.435 "abort": true, 00:09:47.435 "seek_hole": false, 00:09:47.435 "seek_data": false, 00:09:47.435 "copy": true, 00:09:47.435 "nvme_iov_md": false 00:09:47.435 }, 00:09:47.435 "memory_domains": [ 00:09:47.435 { 00:09:47.435 "dma_device_id": "system", 00:09:47.435 "dma_device_type": 1 00:09:47.435 }, 00:09:47.435 { 00:09:47.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.435 "dma_device_type": 2 00:09:47.435 } 00:09:47.435 ], 00:09:47.435 "driver_specific": {} 00:09:47.435 } 00:09:47.435 ] 00:09:47.435 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.435 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:47.435 10:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:47.435 10:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
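With BaseBdev3 claimed, all three members are discovered and the raid moves from `configuring` to `online`, matching the `num_base_bdevs_discovered` progression 0 → 1 → 2 → 3 across the `verify_raid_bdev_state` checks in this log. A toy model of that transition (illustrative only):

```python
def raid_state(num_discovered, num_base_bdevs):
    # The raid stays "configuring" until every base bdev has been
    # claimed, then comes online (as seen right after BaseBdev3 above).
    return "online" if num_discovered == num_base_bdevs else "configuring"

# Replay the progression observed in this log for a 3-member array.
states = [raid_state(n, 3) for n in range(4)]
print(states)  # ['configuring', 'configuring', 'configuring', 'online']
```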
00:09:47.435 10:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:47.435 10:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.435 10:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:47.435 10:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:47.435 10:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.435 10:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.435 10:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.435 10:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.435 10:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.435 10:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.435 10:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.435 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.435 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.436 10:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.436 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.436 10:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.436 "name": "Existed_Raid", 00:09:47.436 "uuid": "44de3d08-f01d-4955-966d-9171acb12c40", 00:09:47.436 "strip_size_kb": 64, 00:09:47.436 "state": "online", 
00:09:47.436 "raid_level": "concat", 00:09:47.436 "superblock": false, 00:09:47.436 "num_base_bdevs": 3, 00:09:47.436 "num_base_bdevs_discovered": 3, 00:09:47.436 "num_base_bdevs_operational": 3, 00:09:47.436 "base_bdevs_list": [ 00:09:47.436 { 00:09:47.436 "name": "BaseBdev1", 00:09:47.436 "uuid": "87476caa-3828-4777-ab8d-a28f4112c14a", 00:09:47.436 "is_configured": true, 00:09:47.436 "data_offset": 0, 00:09:47.436 "data_size": 65536 00:09:47.436 }, 00:09:47.436 { 00:09:47.436 "name": "BaseBdev2", 00:09:47.436 "uuid": "aa383165-72d9-4c1b-9053-be93c153f541", 00:09:47.436 "is_configured": true, 00:09:47.436 "data_offset": 0, 00:09:47.436 "data_size": 65536 00:09:47.436 }, 00:09:47.436 { 00:09:47.436 "name": "BaseBdev3", 00:09:47.436 "uuid": "747b0784-a75e-4707-8ef4-a07b4016866e", 00:09:47.436 "is_configured": true, 00:09:47.436 "data_offset": 0, 00:09:47.436 "data_size": 65536 00:09:47.436 } 00:09:47.436 ] 00:09:47.436 }' 00:09:47.436 10:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.436 10:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.005 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:48.005 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:48.005 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:48.005 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:48.005 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:48.005 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:48.005 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:48.005 10:32:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:48.005 10:32:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.005 10:32:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.005 [2024-11-20 10:32:51.282500] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:48.005 10:32:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.005 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:48.005 "name": "Existed_Raid", 00:09:48.005 "aliases": [ 00:09:48.005 "44de3d08-f01d-4955-966d-9171acb12c40" 00:09:48.005 ], 00:09:48.005 "product_name": "Raid Volume", 00:09:48.005 "block_size": 512, 00:09:48.005 "num_blocks": 196608, 00:09:48.005 "uuid": "44de3d08-f01d-4955-966d-9171acb12c40", 00:09:48.005 "assigned_rate_limits": { 00:09:48.005 "rw_ios_per_sec": 0, 00:09:48.005 "rw_mbytes_per_sec": 0, 00:09:48.005 "r_mbytes_per_sec": 0, 00:09:48.005 "w_mbytes_per_sec": 0 00:09:48.005 }, 00:09:48.005 "claimed": false, 00:09:48.005 "zoned": false, 00:09:48.005 "supported_io_types": { 00:09:48.005 "read": true, 00:09:48.005 "write": true, 00:09:48.005 "unmap": true, 00:09:48.005 "flush": true, 00:09:48.005 "reset": true, 00:09:48.005 "nvme_admin": false, 00:09:48.005 "nvme_io": false, 00:09:48.005 "nvme_io_md": false, 00:09:48.005 "write_zeroes": true, 00:09:48.005 "zcopy": false, 00:09:48.005 "get_zone_info": false, 00:09:48.005 "zone_management": false, 00:09:48.005 "zone_append": false, 00:09:48.005 "compare": false, 00:09:48.005 "compare_and_write": false, 00:09:48.005 "abort": false, 00:09:48.005 "seek_hole": false, 00:09:48.005 "seek_data": false, 00:09:48.005 "copy": false, 00:09:48.005 "nvme_iov_md": false 00:09:48.005 }, 00:09:48.005 "memory_domains": [ 00:09:48.005 { 00:09:48.005 "dma_device_id": "system", 00:09:48.005 "dma_device_type": 1 
00:09:48.005 }, 00:09:48.005 { 00:09:48.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.005 "dma_device_type": 2 00:09:48.005 }, 00:09:48.005 { 00:09:48.005 "dma_device_id": "system", 00:09:48.005 "dma_device_type": 1 00:09:48.005 }, 00:09:48.005 { 00:09:48.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.005 "dma_device_type": 2 00:09:48.005 }, 00:09:48.005 { 00:09:48.005 "dma_device_id": "system", 00:09:48.005 "dma_device_type": 1 00:09:48.005 }, 00:09:48.005 { 00:09:48.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.005 "dma_device_type": 2 00:09:48.005 } 00:09:48.005 ], 00:09:48.005 "driver_specific": { 00:09:48.005 "raid": { 00:09:48.005 "uuid": "44de3d08-f01d-4955-966d-9171acb12c40", 00:09:48.005 "strip_size_kb": 64, 00:09:48.005 "state": "online", 00:09:48.005 "raid_level": "concat", 00:09:48.005 "superblock": false, 00:09:48.005 "num_base_bdevs": 3, 00:09:48.005 "num_base_bdevs_discovered": 3, 00:09:48.005 "num_base_bdevs_operational": 3, 00:09:48.005 "base_bdevs_list": [ 00:09:48.005 { 00:09:48.005 "name": "BaseBdev1", 00:09:48.005 "uuid": "87476caa-3828-4777-ab8d-a28f4112c14a", 00:09:48.005 "is_configured": true, 00:09:48.005 "data_offset": 0, 00:09:48.005 "data_size": 65536 00:09:48.005 }, 00:09:48.005 { 00:09:48.005 "name": "BaseBdev2", 00:09:48.005 "uuid": "aa383165-72d9-4c1b-9053-be93c153f541", 00:09:48.005 "is_configured": true, 00:09:48.005 "data_offset": 0, 00:09:48.005 "data_size": 65536 00:09:48.005 }, 00:09:48.005 { 00:09:48.005 "name": "BaseBdev3", 00:09:48.005 "uuid": "747b0784-a75e-4707-8ef4-a07b4016866e", 00:09:48.005 "is_configured": true, 00:09:48.005 "data_offset": 0, 00:09:48.005 "data_size": 65536 00:09:48.005 } 00:09:48.005 ] 00:09:48.005 } 00:09:48.005 } 00:09:48.005 }' 00:09:48.005 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:48.005 10:32:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:48.005 BaseBdev2 00:09:48.005 BaseBdev3' 00:09:48.005 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.005 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:48.005 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.005 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:48.005 10:32:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.005 10:32:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.005 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.005 10:32:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.005 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.005 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.005 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.005 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:48.005 10:32:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.005 10:32:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.005 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.005 10:32:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.265 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.265 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.265 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.265 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:48.265 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.265 10:32:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.265 10:32:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.265 10:32:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.265 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.265 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.265 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:48.265 10:32:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.265 10:32:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.265 [2024-11-20 10:32:51.561715] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:48.265 [2024-11-20 10:32:51.561813] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:48.265 [2024-11-20 10:32:51.561905] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:48.265 10:32:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:48.265 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:48.265 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:48.265 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:48.265 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:48.265 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:48.265 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:48.265 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.265 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:48.265 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:48.265 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.265 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:48.265 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.265 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.265 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.265 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.265 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.265 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.265 10:32:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.265 10:32:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.265 10:32:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.265 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.265 "name": "Existed_Raid", 00:09:48.265 "uuid": "44de3d08-f01d-4955-966d-9171acb12c40", 00:09:48.265 "strip_size_kb": 64, 00:09:48.265 "state": "offline", 00:09:48.266 "raid_level": "concat", 00:09:48.266 "superblock": false, 00:09:48.266 "num_base_bdevs": 3, 00:09:48.266 "num_base_bdevs_discovered": 2, 00:09:48.266 "num_base_bdevs_operational": 2, 00:09:48.266 "base_bdevs_list": [ 00:09:48.266 { 00:09:48.266 "name": null, 00:09:48.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.266 "is_configured": false, 00:09:48.266 "data_offset": 0, 00:09:48.266 "data_size": 65536 00:09:48.266 }, 00:09:48.266 { 00:09:48.266 "name": "BaseBdev2", 00:09:48.266 "uuid": "aa383165-72d9-4c1b-9053-be93c153f541", 00:09:48.266 "is_configured": true, 00:09:48.266 "data_offset": 0, 00:09:48.266 "data_size": 65536 00:09:48.266 }, 00:09:48.266 { 00:09:48.266 "name": "BaseBdev3", 00:09:48.266 "uuid": "747b0784-a75e-4707-8ef4-a07b4016866e", 00:09:48.266 "is_configured": true, 00:09:48.266 "data_offset": 0, 00:09:48.266 "data_size": 65536 00:09:48.266 } 00:09:48.266 ] 00:09:48.266 }' 00:09:48.266 10:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.266 10:32:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.834 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:48.834 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:48.834 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:48.834 
10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.834 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.834 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.834 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.834 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:48.834 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:48.834 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:48.834 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.834 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.834 [2024-11-20 10:32:52.171774] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:48.834 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.834 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:48.834 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:48.834 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.834 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:48.834 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.834 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.093 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.093 10:32:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:49.093 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:49.093 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:49.093 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.093 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.093 [2024-11-20 10:32:52.348614] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:49.093 [2024-11-20 10:32:52.348730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:49.093 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.093 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:49.093 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:49.093 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.093 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.093 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.093 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:49.093 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.093 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:49.093 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:49.093 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:49.093 
10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:49.093 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:49.093 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:49.093 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.093 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.093 BaseBdev2 00:09:49.093 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.093 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:49.093 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:49.093 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:49.093 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:49.093 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:49.093 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:49.093 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:49.093 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.093 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.093 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.352 [ 00:09:49.352 { 00:09:49.352 "name": "BaseBdev2", 00:09:49.352 "aliases": [ 00:09:49.352 "527eeda9-13f0-44dd-9e59-270a887b222b" 00:09:49.352 ], 00:09:49.352 "product_name": "Malloc disk", 00:09:49.352 "block_size": 512, 00:09:49.352 "num_blocks": 65536, 00:09:49.352 "uuid": "527eeda9-13f0-44dd-9e59-270a887b222b", 00:09:49.352 "assigned_rate_limits": { 00:09:49.352 "rw_ios_per_sec": 0, 00:09:49.352 "rw_mbytes_per_sec": 0, 00:09:49.352 "r_mbytes_per_sec": 0, 00:09:49.352 "w_mbytes_per_sec": 0 00:09:49.352 }, 00:09:49.352 "claimed": false, 00:09:49.352 "zoned": false, 00:09:49.352 "supported_io_types": { 00:09:49.352 "read": true, 00:09:49.352 "write": true, 00:09:49.352 "unmap": true, 00:09:49.352 "flush": true, 00:09:49.352 "reset": true, 00:09:49.352 "nvme_admin": false, 00:09:49.352 "nvme_io": false, 00:09:49.352 "nvme_io_md": false, 00:09:49.352 "write_zeroes": true, 00:09:49.352 "zcopy": true, 00:09:49.352 "get_zone_info": false, 00:09:49.352 "zone_management": false, 00:09:49.352 "zone_append": false, 00:09:49.352 "compare": false, 00:09:49.352 "compare_and_write": false, 00:09:49.352 "abort": true, 00:09:49.352 "seek_hole": false, 00:09:49.352 "seek_data": false, 00:09:49.352 "copy": true, 00:09:49.352 "nvme_iov_md": false 00:09:49.352 }, 00:09:49.352 "memory_domains": [ 00:09:49.352 { 00:09:49.352 "dma_device_id": "system", 00:09:49.352 "dma_device_type": 1 00:09:49.352 }, 00:09:49.352 { 00:09:49.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.352 "dma_device_type": 2 00:09:49.352 } 00:09:49.352 ], 00:09:49.352 "driver_specific": {} 00:09:49.352 } 00:09:49.352 ] 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:49.352 
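(Aside: the `jq` filter the test applies at `bdev_raid.sh@188` — `.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name` — is what turns the raid bdev dump above into the `base_bdev_names` list. A minimal Python sketch of the same selection, using a hand-trimmed copy of the JSON from the log rather than live RPC output:)

```python
import json

# Hand-trimmed copy of the Existed_Raid dump seen in the log above
# (illustrative only; the real record carries many more fields).
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": true},
        {"name": "BaseBdev2", "is_configured": true},
        {"name": "BaseBdev3", "is_configured": true}
      ]
    }
  }
}
""")

# Python equivalent of the jq filter:
#   .driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name
names = [b["name"]
         for b in raid_bdev_info["driver_specific"]["raid"]["base_bdevs_list"]
         if b["is_configured"]]
print("\n".join(names))  # one name per line, as jq -r emits them
```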
10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.352 BaseBdev3 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.352 [ 00:09:49.352 { 00:09:49.352 "name": "BaseBdev3", 00:09:49.352 "aliases": [ 00:09:49.352 "7ed02211-bd58-4441-a32d-2f3c5fd95510" 00:09:49.352 ], 00:09:49.352 "product_name": "Malloc disk", 00:09:49.352 "block_size": 512, 00:09:49.352 "num_blocks": 65536, 00:09:49.352 "uuid": "7ed02211-bd58-4441-a32d-2f3c5fd95510", 00:09:49.352 "assigned_rate_limits": { 00:09:49.352 "rw_ios_per_sec": 0, 00:09:49.352 "rw_mbytes_per_sec": 0, 00:09:49.352 "r_mbytes_per_sec": 0, 00:09:49.352 "w_mbytes_per_sec": 0 00:09:49.352 }, 00:09:49.352 "claimed": false, 00:09:49.352 "zoned": false, 00:09:49.352 "supported_io_types": { 00:09:49.352 "read": true, 00:09:49.352 "write": true, 00:09:49.352 "unmap": true, 00:09:49.352 "flush": true, 00:09:49.352 "reset": true, 00:09:49.352 "nvme_admin": false, 00:09:49.352 "nvme_io": false, 00:09:49.352 "nvme_io_md": false, 00:09:49.352 "write_zeroes": true, 00:09:49.352 "zcopy": true, 00:09:49.352 "get_zone_info": false, 00:09:49.352 "zone_management": false, 00:09:49.352 "zone_append": false, 00:09:49.352 "compare": false, 00:09:49.352 "compare_and_write": false, 00:09:49.352 "abort": true, 00:09:49.352 "seek_hole": false, 00:09:49.352 "seek_data": false, 00:09:49.352 "copy": true, 00:09:49.352 "nvme_iov_md": false 00:09:49.352 }, 00:09:49.352 "memory_domains": [ 00:09:49.352 { 00:09:49.352 "dma_device_id": "system", 00:09:49.352 "dma_device_type": 1 00:09:49.352 }, 00:09:49.352 { 00:09:49.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.352 "dma_device_type": 2 00:09:49.352 } 00:09:49.352 ], 00:09:49.352 "driver_specific": {} 00:09:49.352 } 00:09:49.352 ] 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:49.352 
10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.352 [2024-11-20 10:32:52.686802] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:49.352 [2024-11-20 10:32:52.686920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:49.352 [2024-11-20 10:32:52.687009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:49.352 [2024-11-20 10:32:52.689557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.352 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.353 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.353 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.353 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.353 "name": "Existed_Raid", 00:09:49.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.353 "strip_size_kb": 64, 00:09:49.353 "state": "configuring", 00:09:49.353 "raid_level": "concat", 00:09:49.353 "superblock": false, 00:09:49.353 "num_base_bdevs": 3, 00:09:49.353 "num_base_bdevs_discovered": 2, 00:09:49.353 "num_base_bdevs_operational": 3, 00:09:49.353 "base_bdevs_list": [ 00:09:49.353 { 00:09:49.353 "name": "BaseBdev1", 00:09:49.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.353 "is_configured": false, 00:09:49.353 "data_offset": 0, 00:09:49.353 "data_size": 0 00:09:49.353 }, 00:09:49.353 { 00:09:49.353 "name": "BaseBdev2", 00:09:49.353 "uuid": "527eeda9-13f0-44dd-9e59-270a887b222b", 00:09:49.353 "is_configured": true, 00:09:49.353 "data_offset": 0, 00:09:49.353 "data_size": 65536 00:09:49.353 }, 00:09:49.353 { 00:09:49.353 "name": "BaseBdev3", 00:09:49.353 "uuid": 
"7ed02211-bd58-4441-a32d-2f3c5fd95510", 00:09:49.353 "is_configured": true, 00:09:49.353 "data_offset": 0, 00:09:49.353 "data_size": 65536 00:09:49.353 } 00:09:49.353 ] 00:09:49.353 }' 00:09:49.353 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.353 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.921 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:49.921 10:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.921 10:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.921 [2024-11-20 10:32:53.169974] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:49.921 10:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.921 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:49.921 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.921 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.921 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:49.921 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.921 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.921 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.921 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.921 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:49.921 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.921 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.921 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.921 10:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.921 10:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.921 10:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.921 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.921 "name": "Existed_Raid", 00:09:49.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.921 "strip_size_kb": 64, 00:09:49.921 "state": "configuring", 00:09:49.921 "raid_level": "concat", 00:09:49.921 "superblock": false, 00:09:49.921 "num_base_bdevs": 3, 00:09:49.921 "num_base_bdevs_discovered": 1, 00:09:49.921 "num_base_bdevs_operational": 3, 00:09:49.921 "base_bdevs_list": [ 00:09:49.921 { 00:09:49.921 "name": "BaseBdev1", 00:09:49.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.921 "is_configured": false, 00:09:49.921 "data_offset": 0, 00:09:49.921 "data_size": 0 00:09:49.921 }, 00:09:49.921 { 00:09:49.921 "name": null, 00:09:49.921 "uuid": "527eeda9-13f0-44dd-9e59-270a887b222b", 00:09:49.921 "is_configured": false, 00:09:49.921 "data_offset": 0, 00:09:49.921 "data_size": 65536 00:09:49.921 }, 00:09:49.921 { 00:09:49.921 "name": "BaseBdev3", 00:09:49.921 "uuid": "7ed02211-bd58-4441-a32d-2f3c5fd95510", 00:09:49.921 "is_configured": true, 00:09:49.921 "data_offset": 0, 00:09:49.921 "data_size": 65536 00:09:49.921 } 00:09:49.921 ] 00:09:49.921 }' 00:09:49.921 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
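(Aside: the `'512   '` strings compared at `bdev_raid.sh@193` come from the filter `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")`; jq renders `null` elements as empty strings, so a bdev without metadata yields `512` followed by three spaces, which is what the `[[ 512    == \5\1\2\ \ \ ]]` pattern checks. A sketch of that join under the same assumption of absent metadata fields:)

```python
# Hypothetical bdev record mirroring the Malloc disk dumps in the log:
# block_size is 512 and the three metadata fields are absent (null in jq terms).
bdev = {"block_size": 512, "md_size": None, "md_interleave": None, "dif_type": None}

# Python equivalent of jq's '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")':
# jq converts numbers to strings and nulls to empty strings before joining.
fields = [bdev[k] for k in ("block_size", "md_size", "md_interleave", "dif_type")]
cmp_base_bdev = " ".join("" if v is None else str(v) for v in fields)
print(repr(cmp_base_bdev))  # "512" plus three trailing spaces
```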
00:09:49.921 10:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.179 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.179 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:50.179 10:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.179 10:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.179 10:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.437 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:50.437 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:50.437 10:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.437 10:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.438 [2024-11-20 10:32:53.718067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:50.438 BaseBdev1 00:09:50.438 10:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.438 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:50.438 10:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:50.438 10:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:50.438 10:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:50.438 10:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:50.438 10:32:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:50.438 10:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:50.438 10:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.438 10:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.438 10:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.438 10:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:50.438 10:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.438 10:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.438 [ 00:09:50.438 { 00:09:50.438 "name": "BaseBdev1", 00:09:50.438 "aliases": [ 00:09:50.438 "95190234-d1c5-4638-bf3e-c4cf94ecf980" 00:09:50.438 ], 00:09:50.438 "product_name": "Malloc disk", 00:09:50.438 "block_size": 512, 00:09:50.438 "num_blocks": 65536, 00:09:50.438 "uuid": "95190234-d1c5-4638-bf3e-c4cf94ecf980", 00:09:50.438 "assigned_rate_limits": { 00:09:50.438 "rw_ios_per_sec": 0, 00:09:50.438 "rw_mbytes_per_sec": 0, 00:09:50.438 "r_mbytes_per_sec": 0, 00:09:50.438 "w_mbytes_per_sec": 0 00:09:50.438 }, 00:09:50.438 "claimed": true, 00:09:50.438 "claim_type": "exclusive_write", 00:09:50.438 "zoned": false, 00:09:50.438 "supported_io_types": { 00:09:50.438 "read": true, 00:09:50.438 "write": true, 00:09:50.438 "unmap": true, 00:09:50.438 "flush": true, 00:09:50.438 "reset": true, 00:09:50.438 "nvme_admin": false, 00:09:50.438 "nvme_io": false, 00:09:50.438 "nvme_io_md": false, 00:09:50.438 "write_zeroes": true, 00:09:50.438 "zcopy": true, 00:09:50.438 "get_zone_info": false, 00:09:50.438 "zone_management": false, 00:09:50.438 "zone_append": false, 00:09:50.438 "compare": false, 00:09:50.438 "compare_and_write": false, 
00:09:50.438 "abort": true, 00:09:50.438 "seek_hole": false, 00:09:50.438 "seek_data": false, 00:09:50.438 "copy": true, 00:09:50.438 "nvme_iov_md": false 00:09:50.438 }, 00:09:50.438 "memory_domains": [ 00:09:50.438 { 00:09:50.438 "dma_device_id": "system", 00:09:50.438 "dma_device_type": 1 00:09:50.438 }, 00:09:50.438 { 00:09:50.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.438 "dma_device_type": 2 00:09:50.438 } 00:09:50.438 ], 00:09:50.438 "driver_specific": {} 00:09:50.438 } 00:09:50.438 ] 00:09:50.438 10:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.438 10:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:50.438 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:50.438 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.438 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.438 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:50.438 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.438 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.438 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.438 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.438 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.438 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.438 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:09:50.438 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.438 10:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.438 10:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.438 10:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.438 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.438 "name": "Existed_Raid", 00:09:50.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.438 "strip_size_kb": 64, 00:09:50.438 "state": "configuring", 00:09:50.438 "raid_level": "concat", 00:09:50.438 "superblock": false, 00:09:50.438 "num_base_bdevs": 3, 00:09:50.438 "num_base_bdevs_discovered": 2, 00:09:50.438 "num_base_bdevs_operational": 3, 00:09:50.438 "base_bdevs_list": [ 00:09:50.438 { 00:09:50.438 "name": "BaseBdev1", 00:09:50.438 "uuid": "95190234-d1c5-4638-bf3e-c4cf94ecf980", 00:09:50.438 "is_configured": true, 00:09:50.438 "data_offset": 0, 00:09:50.438 "data_size": 65536 00:09:50.438 }, 00:09:50.438 { 00:09:50.438 "name": null, 00:09:50.438 "uuid": "527eeda9-13f0-44dd-9e59-270a887b222b", 00:09:50.438 "is_configured": false, 00:09:50.438 "data_offset": 0, 00:09:50.438 "data_size": 65536 00:09:50.438 }, 00:09:50.438 { 00:09:50.438 "name": "BaseBdev3", 00:09:50.438 "uuid": "7ed02211-bd58-4441-a32d-2f3c5fd95510", 00:09:50.438 "is_configured": true, 00:09:50.438 "data_offset": 0, 00:09:50.438 "data_size": 65536 00:09:50.438 } 00:09:50.438 ] 00:09:50.438 }' 00:09:50.438 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.438 10:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.696 10:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.696 10:32:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:50.696 10:32:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.696 10:32:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.955 10:32:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.956 10:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:50.956 10:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:50.956 10:32:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.956 10:32:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.956 [2024-11-20 10:32:54.217308] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:50.956 10:32:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.956 10:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:50.956 10:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.956 10:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.956 10:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:50.956 10:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.956 10:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.956 10:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.956 10:32:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.956 10:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.956 10:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.956 10:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.956 10:32:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.956 10:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.956 10:32:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.956 10:32:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.956 10:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.956 "name": "Existed_Raid", 00:09:50.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.956 "strip_size_kb": 64, 00:09:50.956 "state": "configuring", 00:09:50.956 "raid_level": "concat", 00:09:50.956 "superblock": false, 00:09:50.956 "num_base_bdevs": 3, 00:09:50.956 "num_base_bdevs_discovered": 1, 00:09:50.956 "num_base_bdevs_operational": 3, 00:09:50.956 "base_bdevs_list": [ 00:09:50.956 { 00:09:50.956 "name": "BaseBdev1", 00:09:50.956 "uuid": "95190234-d1c5-4638-bf3e-c4cf94ecf980", 00:09:50.956 "is_configured": true, 00:09:50.956 "data_offset": 0, 00:09:50.956 "data_size": 65536 00:09:50.956 }, 00:09:50.956 { 00:09:50.956 "name": null, 00:09:50.956 "uuid": "527eeda9-13f0-44dd-9e59-270a887b222b", 00:09:50.956 "is_configured": false, 00:09:50.956 "data_offset": 0, 00:09:50.956 "data_size": 65536 00:09:50.956 }, 00:09:50.956 { 00:09:50.956 "name": null, 00:09:50.956 "uuid": "7ed02211-bd58-4441-a32d-2f3c5fd95510", 00:09:50.956 "is_configured": false, 00:09:50.956 "data_offset": 0, 00:09:50.956 "data_size": 65536 00:09:50.956 
} 00:09:50.956 ] 00:09:50.956 }' 00:09:50.956 10:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.956 10:32:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.526 10:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.526 10:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:51.526 10:32:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.526 10:32:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.526 10:32:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.526 10:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:51.526 10:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:51.526 10:32:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.526 10:32:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.526 [2024-11-20 10:32:54.764434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:51.526 10:32:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.526 10:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:51.526 10:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.526 10:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.526 10:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:09:51.526 10:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.526 10:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:51.526 10:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.526 10:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.526 10:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.526 10:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.526 10:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.526 10:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.526 10:32:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.526 10:32:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.526 10:32:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.526 10:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.526 "name": "Existed_Raid", 00:09:51.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.526 "strip_size_kb": 64, 00:09:51.526 "state": "configuring", 00:09:51.526 "raid_level": "concat", 00:09:51.526 "superblock": false, 00:09:51.526 "num_base_bdevs": 3, 00:09:51.526 "num_base_bdevs_discovered": 2, 00:09:51.526 "num_base_bdevs_operational": 3, 00:09:51.526 "base_bdevs_list": [ 00:09:51.526 { 00:09:51.526 "name": "BaseBdev1", 00:09:51.526 "uuid": "95190234-d1c5-4638-bf3e-c4cf94ecf980", 00:09:51.526 "is_configured": true, 00:09:51.526 "data_offset": 0, 00:09:51.526 "data_size": 65536 00:09:51.526 }, 00:09:51.526 { 
00:09:51.526 "name": null, 00:09:51.526 "uuid": "527eeda9-13f0-44dd-9e59-270a887b222b", 00:09:51.526 "is_configured": false, 00:09:51.526 "data_offset": 0, 00:09:51.526 "data_size": 65536 00:09:51.526 }, 00:09:51.526 { 00:09:51.526 "name": "BaseBdev3", 00:09:51.526 "uuid": "7ed02211-bd58-4441-a32d-2f3c5fd95510", 00:09:51.526 "is_configured": true, 00:09:51.526 "data_offset": 0, 00:09:51.526 "data_size": 65536 00:09:51.526 } 00:09:51.526 ] 00:09:51.526 }' 00:09:51.526 10:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.526 10:32:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.792 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.792 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:51.792 10:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.792 10:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.792 10:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.792 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:51.792 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:51.792 10:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.792 10:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.792 [2024-11-20 10:32:55.251631] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:52.060 10:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.060 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state 
Existed_Raid configuring concat 64 3 00:09:52.060 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.060 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.060 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:52.060 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.060 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.060 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.060 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.060 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.060 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.060 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.060 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.060 10:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.060 10:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.060 10:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.060 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.060 "name": "Existed_Raid", 00:09:52.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.060 "strip_size_kb": 64, 00:09:52.060 "state": "configuring", 00:09:52.060 "raid_level": "concat", 00:09:52.060 "superblock": false, 00:09:52.060 "num_base_bdevs": 3, 
00:09:52.060 "num_base_bdevs_discovered": 1, 00:09:52.060 "num_base_bdevs_operational": 3, 00:09:52.060 "base_bdevs_list": [ 00:09:52.060 { 00:09:52.060 "name": null, 00:09:52.060 "uuid": "95190234-d1c5-4638-bf3e-c4cf94ecf980", 00:09:52.060 "is_configured": false, 00:09:52.060 "data_offset": 0, 00:09:52.060 "data_size": 65536 00:09:52.060 }, 00:09:52.060 { 00:09:52.060 "name": null, 00:09:52.060 "uuid": "527eeda9-13f0-44dd-9e59-270a887b222b", 00:09:52.060 "is_configured": false, 00:09:52.060 "data_offset": 0, 00:09:52.060 "data_size": 65536 00:09:52.060 }, 00:09:52.060 { 00:09:52.060 "name": "BaseBdev3", 00:09:52.060 "uuid": "7ed02211-bd58-4441-a32d-2f3c5fd95510", 00:09:52.060 "is_configured": true, 00:09:52.060 "data_offset": 0, 00:09:52.060 "data_size": 65536 00:09:52.060 } 00:09:52.060 ] 00:09:52.060 }' 00:09:52.060 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.060 10:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.630 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.630 10:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.630 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:52.630 10:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.630 10:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.630 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:52.630 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:52.630 10:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.630 10:32:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.630 [2024-11-20 10:32:55.868124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:52.630 10:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.630 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:52.630 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.630 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.630 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:52.630 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.630 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.630 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.630 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.630 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.630 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.630 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.630 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.630 10:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.630 10:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.630 10:32:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.630 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.630 "name": "Existed_Raid", 00:09:52.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.630 "strip_size_kb": 64, 00:09:52.630 "state": "configuring", 00:09:52.630 "raid_level": "concat", 00:09:52.630 "superblock": false, 00:09:52.630 "num_base_bdevs": 3, 00:09:52.630 "num_base_bdevs_discovered": 2, 00:09:52.630 "num_base_bdevs_operational": 3, 00:09:52.630 "base_bdevs_list": [ 00:09:52.630 { 00:09:52.630 "name": null, 00:09:52.630 "uuid": "95190234-d1c5-4638-bf3e-c4cf94ecf980", 00:09:52.630 "is_configured": false, 00:09:52.630 "data_offset": 0, 00:09:52.630 "data_size": 65536 00:09:52.630 }, 00:09:52.630 { 00:09:52.630 "name": "BaseBdev2", 00:09:52.630 "uuid": "527eeda9-13f0-44dd-9e59-270a887b222b", 00:09:52.630 "is_configured": true, 00:09:52.630 "data_offset": 0, 00:09:52.630 "data_size": 65536 00:09:52.630 }, 00:09:52.630 { 00:09:52.630 "name": "BaseBdev3", 00:09:52.630 "uuid": "7ed02211-bd58-4441-a32d-2f3c5fd95510", 00:09:52.630 "is_configured": true, 00:09:52.630 "data_offset": 0, 00:09:52.630 "data_size": 65536 00:09:52.630 } 00:09:52.630 ] 00:09:52.630 }' 00:09:52.630 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.630 10:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.891 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:52.891 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.891 10:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.891 10:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.891 10:32:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.891 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:53.151 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.151 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:53.151 10:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.151 10:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.151 10:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.151 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 95190234-d1c5-4638-bf3e-c4cf94ecf980 00:09:53.151 10:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.151 10:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.151 [2024-11-20 10:32:56.465414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:53.151 [2024-11-20 10:32:56.465472] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:53.151 [2024-11-20 10:32:56.465483] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:53.151 [2024-11-20 10:32:56.465770] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:53.151 [2024-11-20 10:32:56.465953] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:53.151 [2024-11-20 10:32:56.465963] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:53.151 [2024-11-20 10:32:56.466259] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:09:53.151 NewBaseBdev 00:09:53.151 10:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.151 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:53.151 10:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:53.151 10:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:53.151 10:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:53.151 10:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:53.151 10:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:53.151 10:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:53.152 10:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.152 10:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.152 10:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.152 10:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:53.152 10:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.152 10:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.152 [ 00:09:53.152 { 00:09:53.152 "name": "NewBaseBdev", 00:09:53.152 "aliases": [ 00:09:53.152 "95190234-d1c5-4638-bf3e-c4cf94ecf980" 00:09:53.152 ], 00:09:53.152 "product_name": "Malloc disk", 00:09:53.152 "block_size": 512, 00:09:53.152 "num_blocks": 65536, 00:09:53.152 "uuid": "95190234-d1c5-4638-bf3e-c4cf94ecf980", 00:09:53.152 "assigned_rate_limits": { 
00:09:53.152 "rw_ios_per_sec": 0, 00:09:53.152 "rw_mbytes_per_sec": 0, 00:09:53.152 "r_mbytes_per_sec": 0, 00:09:53.152 "w_mbytes_per_sec": 0 00:09:53.152 }, 00:09:53.152 "claimed": true, 00:09:53.152 "claim_type": "exclusive_write", 00:09:53.152 "zoned": false, 00:09:53.152 "supported_io_types": { 00:09:53.152 "read": true, 00:09:53.152 "write": true, 00:09:53.152 "unmap": true, 00:09:53.152 "flush": true, 00:09:53.152 "reset": true, 00:09:53.152 "nvme_admin": false, 00:09:53.152 "nvme_io": false, 00:09:53.152 "nvme_io_md": false, 00:09:53.152 "write_zeroes": true, 00:09:53.152 "zcopy": true, 00:09:53.152 "get_zone_info": false, 00:09:53.152 "zone_management": false, 00:09:53.152 "zone_append": false, 00:09:53.152 "compare": false, 00:09:53.152 "compare_and_write": false, 00:09:53.152 "abort": true, 00:09:53.152 "seek_hole": false, 00:09:53.152 "seek_data": false, 00:09:53.152 "copy": true, 00:09:53.152 "nvme_iov_md": false 00:09:53.152 }, 00:09:53.152 "memory_domains": [ 00:09:53.152 { 00:09:53.152 "dma_device_id": "system", 00:09:53.152 "dma_device_type": 1 00:09:53.152 }, 00:09:53.152 { 00:09:53.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.152 "dma_device_type": 2 00:09:53.152 } 00:09:53.152 ], 00:09:53.152 "driver_specific": {} 00:09:53.152 } 00:09:53.152 ] 00:09:53.152 10:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.152 10:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:53.152 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:53.152 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.152 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:53.152 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 
00:09:53.152 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.152 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.152 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.152 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.152 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.152 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.152 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.152 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.152 10:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.152 10:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.152 10:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.152 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.152 "name": "Existed_Raid", 00:09:53.152 "uuid": "ddfb565f-68e3-44a6-a0ea-ce34a2004f72", 00:09:53.152 "strip_size_kb": 64, 00:09:53.152 "state": "online", 00:09:53.152 "raid_level": "concat", 00:09:53.152 "superblock": false, 00:09:53.152 "num_base_bdevs": 3, 00:09:53.152 "num_base_bdevs_discovered": 3, 00:09:53.152 "num_base_bdevs_operational": 3, 00:09:53.152 "base_bdevs_list": [ 00:09:53.152 { 00:09:53.152 "name": "NewBaseBdev", 00:09:53.152 "uuid": "95190234-d1c5-4638-bf3e-c4cf94ecf980", 00:09:53.152 "is_configured": true, 00:09:53.152 "data_offset": 0, 00:09:53.152 "data_size": 65536 00:09:53.152 }, 00:09:53.152 { 00:09:53.152 "name": 
"BaseBdev2", 00:09:53.152 "uuid": "527eeda9-13f0-44dd-9e59-270a887b222b", 00:09:53.152 "is_configured": true, 00:09:53.152 "data_offset": 0, 00:09:53.152 "data_size": 65536 00:09:53.152 }, 00:09:53.152 { 00:09:53.152 "name": "BaseBdev3", 00:09:53.152 "uuid": "7ed02211-bd58-4441-a32d-2f3c5fd95510", 00:09:53.152 "is_configured": true, 00:09:53.152 "data_offset": 0, 00:09:53.152 "data_size": 65536 00:09:53.152 } 00:09:53.152 ] 00:09:53.152 }' 00:09:53.152 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.152 10:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.724 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:53.724 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:53.724 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:53.724 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:53.724 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:53.724 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:53.724 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:53.724 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:53.724 10:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.724 10:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.724 [2024-11-20 10:32:57.000957] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:53.724 10:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:53.724 10:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:53.724 "name": "Existed_Raid", 00:09:53.724 "aliases": [ 00:09:53.724 "ddfb565f-68e3-44a6-a0ea-ce34a2004f72" 00:09:53.724 ], 00:09:53.724 "product_name": "Raid Volume", 00:09:53.724 "block_size": 512, 00:09:53.724 "num_blocks": 196608, 00:09:53.724 "uuid": "ddfb565f-68e3-44a6-a0ea-ce34a2004f72", 00:09:53.724 "assigned_rate_limits": { 00:09:53.724 "rw_ios_per_sec": 0, 00:09:53.724 "rw_mbytes_per_sec": 0, 00:09:53.724 "r_mbytes_per_sec": 0, 00:09:53.724 "w_mbytes_per_sec": 0 00:09:53.724 }, 00:09:53.724 "claimed": false, 00:09:53.724 "zoned": false, 00:09:53.724 "supported_io_types": { 00:09:53.724 "read": true, 00:09:53.724 "write": true, 00:09:53.724 "unmap": true, 00:09:53.724 "flush": true, 00:09:53.724 "reset": true, 00:09:53.724 "nvme_admin": false, 00:09:53.724 "nvme_io": false, 00:09:53.724 "nvme_io_md": false, 00:09:53.724 "write_zeroes": true, 00:09:53.724 "zcopy": false, 00:09:53.724 "get_zone_info": false, 00:09:53.724 "zone_management": false, 00:09:53.724 "zone_append": false, 00:09:53.724 "compare": false, 00:09:53.724 "compare_and_write": false, 00:09:53.724 "abort": false, 00:09:53.724 "seek_hole": false, 00:09:53.724 "seek_data": false, 00:09:53.724 "copy": false, 00:09:53.724 "nvme_iov_md": false 00:09:53.724 }, 00:09:53.724 "memory_domains": [ 00:09:53.724 { 00:09:53.724 "dma_device_id": "system", 00:09:53.724 "dma_device_type": 1 00:09:53.724 }, 00:09:53.724 { 00:09:53.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.724 "dma_device_type": 2 00:09:53.724 }, 00:09:53.724 { 00:09:53.724 "dma_device_id": "system", 00:09:53.724 "dma_device_type": 1 00:09:53.724 }, 00:09:53.724 { 00:09:53.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.724 "dma_device_type": 2 00:09:53.724 }, 00:09:53.724 { 00:09:53.724 "dma_device_id": "system", 00:09:53.724 "dma_device_type": 1 00:09:53.724 }, 00:09:53.724 { 00:09:53.724 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:53.724 "dma_device_type": 2 00:09:53.724 } 00:09:53.724 ], 00:09:53.724 "driver_specific": { 00:09:53.724 "raid": { 00:09:53.724 "uuid": "ddfb565f-68e3-44a6-a0ea-ce34a2004f72", 00:09:53.724 "strip_size_kb": 64, 00:09:53.724 "state": "online", 00:09:53.724 "raid_level": "concat", 00:09:53.724 "superblock": false, 00:09:53.724 "num_base_bdevs": 3, 00:09:53.724 "num_base_bdevs_discovered": 3, 00:09:53.724 "num_base_bdevs_operational": 3, 00:09:53.724 "base_bdevs_list": [ 00:09:53.724 { 00:09:53.724 "name": "NewBaseBdev", 00:09:53.724 "uuid": "95190234-d1c5-4638-bf3e-c4cf94ecf980", 00:09:53.724 "is_configured": true, 00:09:53.724 "data_offset": 0, 00:09:53.724 "data_size": 65536 00:09:53.724 }, 00:09:53.724 { 00:09:53.724 "name": "BaseBdev2", 00:09:53.724 "uuid": "527eeda9-13f0-44dd-9e59-270a887b222b", 00:09:53.724 "is_configured": true, 00:09:53.724 "data_offset": 0, 00:09:53.724 "data_size": 65536 00:09:53.724 }, 00:09:53.724 { 00:09:53.724 "name": "BaseBdev3", 00:09:53.724 "uuid": "7ed02211-bd58-4441-a32d-2f3c5fd95510", 00:09:53.724 "is_configured": true, 00:09:53.724 "data_offset": 0, 00:09:53.724 "data_size": 65536 00:09:53.724 } 00:09:53.724 ] 00:09:53.724 } 00:09:53.724 } 00:09:53.724 }' 00:09:53.724 10:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:53.724 10:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:53.724 BaseBdev2 00:09:53.724 BaseBdev3' 00:09:53.724 10:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.724 10:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:53.724 10:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.724 10:32:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.724 10:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:53.724 10:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.724 10:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.724 10:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.724 10:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.724 10:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.724 10:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.724 10:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:53.724 10:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.724 10:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.725 10:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.725 10:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.984 10:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.984 10:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.984 10:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.984 10:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:09:53.984 10:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:53.984 10:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.984 10:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.984 10:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.984 10:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.984 10:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.984 10:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:53.984 10:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.984 10:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.984 [2024-11-20 10:32:57.276094] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:53.984 [2024-11-20 10:32:57.276191] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:53.984 [2024-11-20 10:32:57.276324] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:53.985 [2024-11-20 10:32:57.276479] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:53.985 [2024-11-20 10:32:57.276561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:53.985 10:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.985 10:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65781 00:09:53.985 10:32:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 65781 ']' 00:09:53.985 10:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65781 00:09:53.985 10:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:53.985 10:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:53.985 10:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65781 00:09:53.985 killing process with pid 65781 00:09:53.985 10:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:53.985 10:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:53.985 10:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65781' 00:09:53.985 10:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 65781 00:09:53.985 [2024-11-20 10:32:57.311522] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:53.985 10:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65781 00:09:54.243 [2024-11-20 10:32:57.620620] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:55.623 10:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:55.623 00:09:55.623 real 0m11.081s 00:09:55.623 user 0m17.600s 00:09:55.623 sys 0m1.922s 00:09:55.623 ************************************ 00:09:55.623 END TEST raid_state_function_test 00:09:55.623 ************************************ 00:09:55.623 10:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.623 10:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.623 10:32:58 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test concat 3 true 00:09:55.623 10:32:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:55.623 10:32:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.623 10:32:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:55.623 ************************************ 00:09:55.623 START TEST raid_state_function_test_sb 00:09:55.623 ************************************ 00:09:55.623 10:32:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:09:55.623 10:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:55.623 10:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:55.623 10:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:55.623 10:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:55.623 10:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:55.623 10:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.623 10:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:55.623 10:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:55.623 10:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.623 10:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:55.623 10:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:55.623 10:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.623 10:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 
00:09:55.623 10:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:55.623 10:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.623 10:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:55.623 10:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:55.623 10:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:55.623 10:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:55.623 10:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:55.623 10:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:55.623 10:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:55.623 10:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:55.623 10:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:55.623 10:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:55.623 10:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:55.623 Process raid pid: 66408 00:09:55.623 10:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66408 00:09:55.623 10:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:55.623 10:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66408' 00:09:55.623 10:32:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 66408 00:09:55.623 10:32:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66408 ']' 00:09:55.623 10:32:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.623 10:32:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:55.623 10:32:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.623 10:32:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:55.623 10:32:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.623 [2024-11-20 10:32:58.951660] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:09:55.623 [2024-11-20 10:32:58.951940] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:55.883 [2024-11-20 10:32:59.140232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.883 [2024-11-20 10:32:59.262121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.141 [2024-11-20 10:32:59.467635] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:56.141 [2024-11-20 10:32:59.467672] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:56.399 10:32:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:56.399 10:32:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:56.399 10:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:56.399 10:32:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.399 10:32:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.399 [2024-11-20 10:32:59.768585] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:56.399 [2024-11-20 10:32:59.768649] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:56.399 [2024-11-20 10:32:59.768666] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:56.399 [2024-11-20 10:32:59.768683] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:56.399 [2024-11-20 10:32:59.768694] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:09:56.399 [2024-11-20 10:32:59.768709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:56.399 10:32:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.399 10:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:56.399 10:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.399 10:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.399 10:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:56.399 10:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.399 10:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.399 10:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.399 10:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.399 10:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.399 10:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.399 10:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.399 10:32:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.399 10:32:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.399 10:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.399 10:32:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.399 10:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.399 "name": "Existed_Raid", 00:09:56.399 "uuid": "9128f25e-1a91-4ada-a060-180a137ab23f", 00:09:56.399 "strip_size_kb": 64, 00:09:56.399 "state": "configuring", 00:09:56.399 "raid_level": "concat", 00:09:56.399 "superblock": true, 00:09:56.399 "num_base_bdevs": 3, 00:09:56.399 "num_base_bdevs_discovered": 0, 00:09:56.399 "num_base_bdevs_operational": 3, 00:09:56.399 "base_bdevs_list": [ 00:09:56.399 { 00:09:56.399 "name": "BaseBdev1", 00:09:56.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.399 "is_configured": false, 00:09:56.399 "data_offset": 0, 00:09:56.399 "data_size": 0 00:09:56.399 }, 00:09:56.399 { 00:09:56.399 "name": "BaseBdev2", 00:09:56.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.399 "is_configured": false, 00:09:56.399 "data_offset": 0, 00:09:56.399 "data_size": 0 00:09:56.399 }, 00:09:56.399 { 00:09:56.399 "name": "BaseBdev3", 00:09:56.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.399 "is_configured": false, 00:09:56.399 "data_offset": 0, 00:09:56.399 "data_size": 0 00:09:56.399 } 00:09:56.399 ] 00:09:56.399 }' 00:09:56.399 10:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.399 10:32:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.969 [2024-11-20 10:33:00.195846] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:56.969 [2024-11-20 10:33:00.195967] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.969 [2024-11-20 10:33:00.207820] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:56.969 [2024-11-20 10:33:00.207932] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:56.969 [2024-11-20 10:33:00.207974] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:56.969 [2024-11-20 10:33:00.208008] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:56.969 [2024-11-20 10:33:00.208087] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:56.969 [2024-11-20 10:33:00.208132] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.969 [2024-11-20 10:33:00.256936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:56.969 BaseBdev1 
00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.969 [ 00:09:56.969 { 00:09:56.969 "name": "BaseBdev1", 00:09:56.969 "aliases": [ 00:09:56.969 "42cfa478-7fd0-4239-8abb-c99fa36fdd38" 00:09:56.969 ], 00:09:56.969 "product_name": "Malloc disk", 00:09:56.969 "block_size": 512, 00:09:56.969 "num_blocks": 65536, 00:09:56.969 "uuid": "42cfa478-7fd0-4239-8abb-c99fa36fdd38", 00:09:56.969 "assigned_rate_limits": { 00:09:56.969 
"rw_ios_per_sec": 0, 00:09:56.969 "rw_mbytes_per_sec": 0, 00:09:56.969 "r_mbytes_per_sec": 0, 00:09:56.969 "w_mbytes_per_sec": 0 00:09:56.969 }, 00:09:56.969 "claimed": true, 00:09:56.969 "claim_type": "exclusive_write", 00:09:56.969 "zoned": false, 00:09:56.969 "supported_io_types": { 00:09:56.969 "read": true, 00:09:56.969 "write": true, 00:09:56.969 "unmap": true, 00:09:56.969 "flush": true, 00:09:56.969 "reset": true, 00:09:56.969 "nvme_admin": false, 00:09:56.969 "nvme_io": false, 00:09:56.969 "nvme_io_md": false, 00:09:56.969 "write_zeroes": true, 00:09:56.969 "zcopy": true, 00:09:56.969 "get_zone_info": false, 00:09:56.969 "zone_management": false, 00:09:56.969 "zone_append": false, 00:09:56.969 "compare": false, 00:09:56.969 "compare_and_write": false, 00:09:56.969 "abort": true, 00:09:56.969 "seek_hole": false, 00:09:56.969 "seek_data": false, 00:09:56.969 "copy": true, 00:09:56.969 "nvme_iov_md": false 00:09:56.969 }, 00:09:56.969 "memory_domains": [ 00:09:56.969 { 00:09:56.969 "dma_device_id": "system", 00:09:56.969 "dma_device_type": 1 00:09:56.969 }, 00:09:56.969 { 00:09:56.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.969 "dma_device_type": 2 00:09:56.969 } 00:09:56.969 ], 00:09:56.969 "driver_specific": {} 00:09:56.969 } 00:09:56.969 ] 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.969 "name": "Existed_Raid", 00:09:56.969 "uuid": "ef4428e8-5a87-4a51-8cdf-223a884965f4", 00:09:56.969 "strip_size_kb": 64, 00:09:56.969 "state": "configuring", 00:09:56.969 "raid_level": "concat", 00:09:56.969 "superblock": true, 00:09:56.969 "num_base_bdevs": 3, 00:09:56.969 "num_base_bdevs_discovered": 1, 00:09:56.969 "num_base_bdevs_operational": 3, 00:09:56.969 "base_bdevs_list": [ 00:09:56.969 { 00:09:56.969 "name": "BaseBdev1", 00:09:56.969 "uuid": "42cfa478-7fd0-4239-8abb-c99fa36fdd38", 00:09:56.969 "is_configured": true, 00:09:56.969 "data_offset": 2048, 00:09:56.969 "data_size": 
63488 00:09:56.969 }, 00:09:56.969 { 00:09:56.969 "name": "BaseBdev2", 00:09:56.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.969 "is_configured": false, 00:09:56.969 "data_offset": 0, 00:09:56.969 "data_size": 0 00:09:56.969 }, 00:09:56.969 { 00:09:56.969 "name": "BaseBdev3", 00:09:56.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.969 "is_configured": false, 00:09:56.969 "data_offset": 0, 00:09:56.969 "data_size": 0 00:09:56.969 } 00:09:56.969 ] 00:09:56.969 }' 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.969 10:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.538 10:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:57.538 10:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.538 10:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.538 [2024-11-20 10:33:00.736245] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:57.538 [2024-11-20 10:33:00.736414] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:57.538 10:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.538 10:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:57.538 10:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.538 10:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.538 [2024-11-20 10:33:00.748317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:57.538 [2024-11-20 
10:33:00.750454] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:57.538 [2024-11-20 10:33:00.750507] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:57.538 [2024-11-20 10:33:00.750521] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:57.538 [2024-11-20 10:33:00.750534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:57.538 10:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.538 10:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:57.538 10:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:57.538 10:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:57.538 10:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.538 10:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.538 10:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:57.538 10:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.538 10:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.538 10:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.538 10:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.538 10:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.538 10:33:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.538 10:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.538 10:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.538 10:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.538 10:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.538 10:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.538 10:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.538 "name": "Existed_Raid", 00:09:57.538 "uuid": "db5c6611-1db6-4d5a-8924-4ebb61c70bb1", 00:09:57.538 "strip_size_kb": 64, 00:09:57.538 "state": "configuring", 00:09:57.538 "raid_level": "concat", 00:09:57.538 "superblock": true, 00:09:57.538 "num_base_bdevs": 3, 00:09:57.538 "num_base_bdevs_discovered": 1, 00:09:57.538 "num_base_bdevs_operational": 3, 00:09:57.538 "base_bdevs_list": [ 00:09:57.538 { 00:09:57.538 "name": "BaseBdev1", 00:09:57.538 "uuid": "42cfa478-7fd0-4239-8abb-c99fa36fdd38", 00:09:57.538 "is_configured": true, 00:09:57.538 "data_offset": 2048, 00:09:57.538 "data_size": 63488 00:09:57.538 }, 00:09:57.538 { 00:09:57.538 "name": "BaseBdev2", 00:09:57.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.538 "is_configured": false, 00:09:57.538 "data_offset": 0, 00:09:57.538 "data_size": 0 00:09:57.538 }, 00:09:57.538 { 00:09:57.538 "name": "BaseBdev3", 00:09:57.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.538 "is_configured": false, 00:09:57.538 "data_offset": 0, 00:09:57.538 "data_size": 0 00:09:57.538 } 00:09:57.538 ] 00:09:57.538 }' 00:09:57.539 10:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.539 10:33:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:57.798 10:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:57.798 10:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.798 10:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.798 [2024-11-20 10:33:01.257842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:57.798 BaseBdev2 00:09:57.798 10:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.798 10:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:57.798 10:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:57.798 10:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:57.798 10:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:57.798 10:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:57.798 10:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:57.798 10:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:57.798 10:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.798 10:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.057 10:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.057 10:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:58.057 10:33:01 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.057 10:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.057 [ 00:09:58.057 { 00:09:58.057 "name": "BaseBdev2", 00:09:58.057 "aliases": [ 00:09:58.057 "c448e620-2bda-48d5-949e-836a9bfdde3c" 00:09:58.057 ], 00:09:58.057 "product_name": "Malloc disk", 00:09:58.057 "block_size": 512, 00:09:58.057 "num_blocks": 65536, 00:09:58.057 "uuid": "c448e620-2bda-48d5-949e-836a9bfdde3c", 00:09:58.057 "assigned_rate_limits": { 00:09:58.057 "rw_ios_per_sec": 0, 00:09:58.057 "rw_mbytes_per_sec": 0, 00:09:58.057 "r_mbytes_per_sec": 0, 00:09:58.057 "w_mbytes_per_sec": 0 00:09:58.057 }, 00:09:58.057 "claimed": true, 00:09:58.057 "claim_type": "exclusive_write", 00:09:58.057 "zoned": false, 00:09:58.057 "supported_io_types": { 00:09:58.057 "read": true, 00:09:58.057 "write": true, 00:09:58.057 "unmap": true, 00:09:58.057 "flush": true, 00:09:58.057 "reset": true, 00:09:58.057 "nvme_admin": false, 00:09:58.057 "nvme_io": false, 00:09:58.057 "nvme_io_md": false, 00:09:58.057 "write_zeroes": true, 00:09:58.057 "zcopy": true, 00:09:58.057 "get_zone_info": false, 00:09:58.057 "zone_management": false, 00:09:58.057 "zone_append": false, 00:09:58.057 "compare": false, 00:09:58.057 "compare_and_write": false, 00:09:58.057 "abort": true, 00:09:58.057 "seek_hole": false, 00:09:58.057 "seek_data": false, 00:09:58.057 "copy": true, 00:09:58.057 "nvme_iov_md": false 00:09:58.057 }, 00:09:58.057 "memory_domains": [ 00:09:58.057 { 00:09:58.057 "dma_device_id": "system", 00:09:58.057 "dma_device_type": 1 00:09:58.057 }, 00:09:58.057 { 00:09:58.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.057 "dma_device_type": 2 00:09:58.058 } 00:09:58.058 ], 00:09:58.058 "driver_specific": {} 00:09:58.058 } 00:09:58.058 ] 00:09:58.058 10:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.058 10:33:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:09:58.058 10:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:58.058 10:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:58.058 10:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:58.058 10:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.058 10:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.058 10:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:58.058 10:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.058 10:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.058 10:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.058 10:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.058 10:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.058 10:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.058 10:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.058 10:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.058 10:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.058 10:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.058 10:33:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.058 10:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.058 "name": "Existed_Raid", 00:09:58.058 "uuid": "db5c6611-1db6-4d5a-8924-4ebb61c70bb1", 00:09:58.058 "strip_size_kb": 64, 00:09:58.058 "state": "configuring", 00:09:58.058 "raid_level": "concat", 00:09:58.058 "superblock": true, 00:09:58.058 "num_base_bdevs": 3, 00:09:58.058 "num_base_bdevs_discovered": 2, 00:09:58.058 "num_base_bdevs_operational": 3, 00:09:58.058 "base_bdevs_list": [ 00:09:58.058 { 00:09:58.058 "name": "BaseBdev1", 00:09:58.058 "uuid": "42cfa478-7fd0-4239-8abb-c99fa36fdd38", 00:09:58.058 "is_configured": true, 00:09:58.058 "data_offset": 2048, 00:09:58.058 "data_size": 63488 00:09:58.058 }, 00:09:58.058 { 00:09:58.058 "name": "BaseBdev2", 00:09:58.058 "uuid": "c448e620-2bda-48d5-949e-836a9bfdde3c", 00:09:58.058 "is_configured": true, 00:09:58.058 "data_offset": 2048, 00:09:58.058 "data_size": 63488 00:09:58.058 }, 00:09:58.058 { 00:09:58.058 "name": "BaseBdev3", 00:09:58.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.058 "is_configured": false, 00:09:58.058 "data_offset": 0, 00:09:58.058 "data_size": 0 00:09:58.058 } 00:09:58.058 ] 00:09:58.058 }' 00:09:58.058 10:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.058 10:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.317 10:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:58.317 10:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.317 10:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.317 [2024-11-20 10:33:01.771993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:58.317 [2024-11-20 10:33:01.772490] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:58.317 [2024-11-20 10:33:01.772572] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:58.317 [2024-11-20 10:33:01.772959] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:58.317 BaseBdev3 00:09:58.317 [2024-11-20 10:33:01.773247] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:58.317 [2024-11-20 10:33:01.773309] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:58.317 [2024-11-20 10:33:01.773562] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:58.317 10:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.317 10:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:58.317 10:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:58.317 10:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:58.317 10:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:58.317 10:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:58.317 10:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:58.317 10:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:58.317 10:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.317 10:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.317 10:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:09:58.317 10:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:58.317 10:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.317 10:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.575 [ 00:09:58.576 { 00:09:58.576 "name": "BaseBdev3", 00:09:58.576 "aliases": [ 00:09:58.576 "bbf0ccfc-10c8-4044-95a3-93eff6d83ce5" 00:09:58.576 ], 00:09:58.576 "product_name": "Malloc disk", 00:09:58.576 "block_size": 512, 00:09:58.576 "num_blocks": 65536, 00:09:58.576 "uuid": "bbf0ccfc-10c8-4044-95a3-93eff6d83ce5", 00:09:58.576 "assigned_rate_limits": { 00:09:58.576 "rw_ios_per_sec": 0, 00:09:58.576 "rw_mbytes_per_sec": 0, 00:09:58.576 "r_mbytes_per_sec": 0, 00:09:58.576 "w_mbytes_per_sec": 0 00:09:58.576 }, 00:09:58.576 "claimed": true, 00:09:58.576 "claim_type": "exclusive_write", 00:09:58.576 "zoned": false, 00:09:58.576 "supported_io_types": { 00:09:58.576 "read": true, 00:09:58.576 "write": true, 00:09:58.576 "unmap": true, 00:09:58.576 "flush": true, 00:09:58.576 "reset": true, 00:09:58.576 "nvme_admin": false, 00:09:58.576 "nvme_io": false, 00:09:58.576 "nvme_io_md": false, 00:09:58.576 "write_zeroes": true, 00:09:58.576 "zcopy": true, 00:09:58.576 "get_zone_info": false, 00:09:58.576 "zone_management": false, 00:09:58.576 "zone_append": false, 00:09:58.576 "compare": false, 00:09:58.576 "compare_and_write": false, 00:09:58.576 "abort": true, 00:09:58.576 "seek_hole": false, 00:09:58.576 "seek_data": false, 00:09:58.576 "copy": true, 00:09:58.576 "nvme_iov_md": false 00:09:58.576 }, 00:09:58.576 "memory_domains": [ 00:09:58.576 { 00:09:58.576 "dma_device_id": "system", 00:09:58.576 "dma_device_type": 1 00:09:58.576 }, 00:09:58.576 { 00:09:58.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.576 "dma_device_type": 2 00:09:58.576 } 00:09:58.576 ], 00:09:58.576 "driver_specific": 
{} 00:09:58.576 } 00:09:58.576 ] 00:09:58.576 10:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.576 10:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:58.576 10:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:58.576 10:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:58.576 10:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:58.576 10:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.576 10:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:58.576 10:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:58.576 10:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.576 10:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.576 10:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.576 10:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.576 10:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.576 10:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.576 10:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.576 10:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.576 10:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:09:58.576 10:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.576 10:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.576 10:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.576 "name": "Existed_Raid", 00:09:58.576 "uuid": "db5c6611-1db6-4d5a-8924-4ebb61c70bb1", 00:09:58.576 "strip_size_kb": 64, 00:09:58.576 "state": "online", 00:09:58.576 "raid_level": "concat", 00:09:58.576 "superblock": true, 00:09:58.576 "num_base_bdevs": 3, 00:09:58.576 "num_base_bdevs_discovered": 3, 00:09:58.576 "num_base_bdevs_operational": 3, 00:09:58.576 "base_bdevs_list": [ 00:09:58.576 { 00:09:58.576 "name": "BaseBdev1", 00:09:58.576 "uuid": "42cfa478-7fd0-4239-8abb-c99fa36fdd38", 00:09:58.576 "is_configured": true, 00:09:58.576 "data_offset": 2048, 00:09:58.576 "data_size": 63488 00:09:58.576 }, 00:09:58.576 { 00:09:58.576 "name": "BaseBdev2", 00:09:58.576 "uuid": "c448e620-2bda-48d5-949e-836a9bfdde3c", 00:09:58.576 "is_configured": true, 00:09:58.576 "data_offset": 2048, 00:09:58.576 "data_size": 63488 00:09:58.576 }, 00:09:58.576 { 00:09:58.576 "name": "BaseBdev3", 00:09:58.576 "uuid": "bbf0ccfc-10c8-4044-95a3-93eff6d83ce5", 00:09:58.576 "is_configured": true, 00:09:58.576 "data_offset": 2048, 00:09:58.576 "data_size": 63488 00:09:58.576 } 00:09:58.576 ] 00:09:58.576 }' 00:09:58.576 10:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.576 10:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.834 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:58.834 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:58.834 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:09:58.834 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:58.834 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:58.834 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:58.834 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:58.834 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:58.834 10:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.834 10:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.834 [2024-11-20 10:33:02.307533] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:59.093 10:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.093 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:59.093 "name": "Existed_Raid", 00:09:59.093 "aliases": [ 00:09:59.093 "db5c6611-1db6-4d5a-8924-4ebb61c70bb1" 00:09:59.093 ], 00:09:59.093 "product_name": "Raid Volume", 00:09:59.093 "block_size": 512, 00:09:59.093 "num_blocks": 190464, 00:09:59.093 "uuid": "db5c6611-1db6-4d5a-8924-4ebb61c70bb1", 00:09:59.093 "assigned_rate_limits": { 00:09:59.093 "rw_ios_per_sec": 0, 00:09:59.093 "rw_mbytes_per_sec": 0, 00:09:59.093 "r_mbytes_per_sec": 0, 00:09:59.093 "w_mbytes_per_sec": 0 00:09:59.093 }, 00:09:59.093 "claimed": false, 00:09:59.093 "zoned": false, 00:09:59.093 "supported_io_types": { 00:09:59.093 "read": true, 00:09:59.093 "write": true, 00:09:59.093 "unmap": true, 00:09:59.093 "flush": true, 00:09:59.093 "reset": true, 00:09:59.093 "nvme_admin": false, 00:09:59.093 "nvme_io": false, 00:09:59.093 "nvme_io_md": false, 00:09:59.093 
"write_zeroes": true, 00:09:59.093 "zcopy": false, 00:09:59.093 "get_zone_info": false, 00:09:59.093 "zone_management": false, 00:09:59.093 "zone_append": false, 00:09:59.093 "compare": false, 00:09:59.093 "compare_and_write": false, 00:09:59.093 "abort": false, 00:09:59.093 "seek_hole": false, 00:09:59.093 "seek_data": false, 00:09:59.093 "copy": false, 00:09:59.093 "nvme_iov_md": false 00:09:59.093 }, 00:09:59.093 "memory_domains": [ 00:09:59.093 { 00:09:59.093 "dma_device_id": "system", 00:09:59.093 "dma_device_type": 1 00:09:59.093 }, 00:09:59.093 { 00:09:59.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.093 "dma_device_type": 2 00:09:59.093 }, 00:09:59.093 { 00:09:59.093 "dma_device_id": "system", 00:09:59.093 "dma_device_type": 1 00:09:59.093 }, 00:09:59.093 { 00:09:59.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.093 "dma_device_type": 2 00:09:59.093 }, 00:09:59.093 { 00:09:59.093 "dma_device_id": "system", 00:09:59.093 "dma_device_type": 1 00:09:59.093 }, 00:09:59.093 { 00:09:59.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.093 "dma_device_type": 2 00:09:59.093 } 00:09:59.093 ], 00:09:59.093 "driver_specific": { 00:09:59.093 "raid": { 00:09:59.093 "uuid": "db5c6611-1db6-4d5a-8924-4ebb61c70bb1", 00:09:59.093 "strip_size_kb": 64, 00:09:59.093 "state": "online", 00:09:59.093 "raid_level": "concat", 00:09:59.093 "superblock": true, 00:09:59.093 "num_base_bdevs": 3, 00:09:59.093 "num_base_bdevs_discovered": 3, 00:09:59.093 "num_base_bdevs_operational": 3, 00:09:59.093 "base_bdevs_list": [ 00:09:59.093 { 00:09:59.093 "name": "BaseBdev1", 00:09:59.093 "uuid": "42cfa478-7fd0-4239-8abb-c99fa36fdd38", 00:09:59.093 "is_configured": true, 00:09:59.093 "data_offset": 2048, 00:09:59.093 "data_size": 63488 00:09:59.093 }, 00:09:59.093 { 00:09:59.093 "name": "BaseBdev2", 00:09:59.093 "uuid": "c448e620-2bda-48d5-949e-836a9bfdde3c", 00:09:59.093 "is_configured": true, 00:09:59.093 "data_offset": 2048, 00:09:59.093 "data_size": 63488 00:09:59.093 }, 
00:09:59.093 { 00:09:59.093 "name": "BaseBdev3", 00:09:59.093 "uuid": "bbf0ccfc-10c8-4044-95a3-93eff6d83ce5", 00:09:59.093 "is_configured": true, 00:09:59.093 "data_offset": 2048, 00:09:59.093 "data_size": 63488 00:09:59.093 } 00:09:59.093 ] 00:09:59.093 } 00:09:59.093 } 00:09:59.093 }' 00:09:59.093 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:59.093 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:59.093 BaseBdev2 00:09:59.093 BaseBdev3' 00:09:59.093 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.093 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:59.093 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.093 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:59.093 10:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.093 10:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.093 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.093 10:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.093 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.093 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.093 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.093 
10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.093 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:59.093 10:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.093 10:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.093 10:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.093 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.093 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.093 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.093 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.093 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:59.093 10:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.093 10:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.093 10:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.352 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.352 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.352 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:59.352 10:33:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.352 10:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.352 [2024-11-20 10:33:02.586768] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:59.352 [2024-11-20 10:33:02.586807] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:59.352 [2024-11-20 10:33:02.586870] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:59.352 10:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.352 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:59.352 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:59.352 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:59.352 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:59.352 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:59.352 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:59.352 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.352 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:59.352 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:59.352 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.352 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:59.352 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:59.352 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.352 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.352 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.352 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.352 10:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.352 10:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.352 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.352 10:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.352 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.352 "name": "Existed_Raid", 00:09:59.352 "uuid": "db5c6611-1db6-4d5a-8924-4ebb61c70bb1", 00:09:59.352 "strip_size_kb": 64, 00:09:59.352 "state": "offline", 00:09:59.352 "raid_level": "concat", 00:09:59.352 "superblock": true, 00:09:59.352 "num_base_bdevs": 3, 00:09:59.352 "num_base_bdevs_discovered": 2, 00:09:59.352 "num_base_bdevs_operational": 2, 00:09:59.352 "base_bdevs_list": [ 00:09:59.352 { 00:09:59.352 "name": null, 00:09:59.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.352 "is_configured": false, 00:09:59.352 "data_offset": 0, 00:09:59.352 "data_size": 63488 00:09:59.352 }, 00:09:59.352 { 00:09:59.352 "name": "BaseBdev2", 00:09:59.352 "uuid": "c448e620-2bda-48d5-949e-836a9bfdde3c", 00:09:59.352 "is_configured": true, 00:09:59.352 "data_offset": 2048, 00:09:59.352 "data_size": 63488 00:09:59.352 }, 00:09:59.352 { 00:09:59.353 "name": "BaseBdev3", 00:09:59.353 "uuid": "bbf0ccfc-10c8-4044-95a3-93eff6d83ce5", 
00:09:59.353 "is_configured": true, 00:09:59.353 "data_offset": 2048, 00:09:59.353 "data_size": 63488 00:09:59.353 } 00:09:59.353 ] 00:09:59.353 }' 00:09:59.353 10:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.353 10:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.919 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:59.919 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:59.919 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:59.919 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.919 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.919 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.919 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.919 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:59.919 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:59.919 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:59.919 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.919 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.919 [2024-11-20 10:33:03.213101] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:59.919 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.919 10:33:03 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:59.919 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:59.919 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.919 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:59.919 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.919 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.919 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.179 [2024-11-20 10:33:03.403552] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:00.179 [2024-11-20 10:33:03.403686] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.179 BaseBdev2 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:00.179 10:33:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.179 [ 00:10:00.179 { 00:10:00.179 "name": "BaseBdev2", 00:10:00.179 "aliases": [ 00:10:00.179 "6b8a9b15-e9fb-44e4-ad6f-3c4df81ddd11" 00:10:00.179 ], 00:10:00.179 "product_name": "Malloc disk", 00:10:00.179 "block_size": 512, 00:10:00.179 "num_blocks": 65536, 00:10:00.179 "uuid": "6b8a9b15-e9fb-44e4-ad6f-3c4df81ddd11", 00:10:00.179 "assigned_rate_limits": { 00:10:00.179 "rw_ios_per_sec": 0, 00:10:00.179 "rw_mbytes_per_sec": 0, 00:10:00.179 "r_mbytes_per_sec": 0, 00:10:00.179 "w_mbytes_per_sec": 0 00:10:00.179 }, 00:10:00.179 "claimed": false, 00:10:00.179 "zoned": false, 00:10:00.179 "supported_io_types": { 00:10:00.179 "read": true, 00:10:00.179 "write": true, 00:10:00.179 "unmap": true, 00:10:00.179 "flush": true, 00:10:00.179 "reset": true, 00:10:00.179 "nvme_admin": false, 00:10:00.179 "nvme_io": false, 00:10:00.179 "nvme_io_md": false, 00:10:00.179 "write_zeroes": true, 00:10:00.179 "zcopy": true, 00:10:00.179 "get_zone_info": false, 00:10:00.179 
"zone_management": false, 00:10:00.179 "zone_append": false, 00:10:00.179 "compare": false, 00:10:00.179 "compare_and_write": false, 00:10:00.179 "abort": true, 00:10:00.179 "seek_hole": false, 00:10:00.179 "seek_data": false, 00:10:00.179 "copy": true, 00:10:00.179 "nvme_iov_md": false 00:10:00.179 }, 00:10:00.179 "memory_domains": [ 00:10:00.179 { 00:10:00.179 "dma_device_id": "system", 00:10:00.179 "dma_device_type": 1 00:10:00.179 }, 00:10:00.179 { 00:10:00.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.179 "dma_device_type": 2 00:10:00.179 } 00:10:00.179 ], 00:10:00.179 "driver_specific": {} 00:10:00.179 } 00:10:00.179 ] 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.179 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.438 BaseBdev3 00:10:00.438 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.438 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:00.438 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:00.438 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:00.438 10:33:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:10:00.438 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:00.438 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:00.438 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:00.438 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.438 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.438 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.438 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:00.438 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.438 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.438 [ 00:10:00.438 { 00:10:00.438 "name": "BaseBdev3", 00:10:00.438 "aliases": [ 00:10:00.438 "098c8a34-1906-4654-be83-99200e9b6470" 00:10:00.438 ], 00:10:00.438 "product_name": "Malloc disk", 00:10:00.438 "block_size": 512, 00:10:00.438 "num_blocks": 65536, 00:10:00.438 "uuid": "098c8a34-1906-4654-be83-99200e9b6470", 00:10:00.438 "assigned_rate_limits": { 00:10:00.438 "rw_ios_per_sec": 0, 00:10:00.438 "rw_mbytes_per_sec": 0, 00:10:00.438 "r_mbytes_per_sec": 0, 00:10:00.438 "w_mbytes_per_sec": 0 00:10:00.438 }, 00:10:00.438 "claimed": false, 00:10:00.438 "zoned": false, 00:10:00.438 "supported_io_types": { 00:10:00.438 "read": true, 00:10:00.438 "write": true, 00:10:00.438 "unmap": true, 00:10:00.438 "flush": true, 00:10:00.438 "reset": true, 00:10:00.438 "nvme_admin": false, 00:10:00.438 "nvme_io": false, 00:10:00.438 "nvme_io_md": false, 00:10:00.438 "write_zeroes": true, 00:10:00.438 
"zcopy": true, 00:10:00.438 "get_zone_info": false, 00:10:00.438 "zone_management": false, 00:10:00.438 "zone_append": false, 00:10:00.438 "compare": false, 00:10:00.438 "compare_and_write": false, 00:10:00.438 "abort": true, 00:10:00.438 "seek_hole": false, 00:10:00.438 "seek_data": false, 00:10:00.438 "copy": true, 00:10:00.438 "nvme_iov_md": false 00:10:00.438 }, 00:10:00.438 "memory_domains": [ 00:10:00.438 { 00:10:00.438 "dma_device_id": "system", 00:10:00.438 "dma_device_type": 1 00:10:00.438 }, 00:10:00.438 { 00:10:00.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.438 "dma_device_type": 2 00:10:00.438 } 00:10:00.438 ], 00:10:00.438 "driver_specific": {} 00:10:00.438 } 00:10:00.438 ] 00:10:00.438 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.438 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:00.438 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:00.438 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:00.438 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:00.438 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.438 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.439 [2024-11-20 10:33:03.749967] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:00.439 [2024-11-20 10:33:03.750032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:00.439 [2024-11-20 10:33:03.750064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:00.439 [2024-11-20 10:33:03.752123] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:00.439 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.439 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:00.439 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.439 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.439 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:00.439 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.439 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.439 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.439 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.439 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.439 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.439 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.439 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.439 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.439 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.439 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.439 10:33:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.439 "name": "Existed_Raid", 00:10:00.439 "uuid": "2fe72165-71be-4f8f-bf56-a81d6217ad61", 00:10:00.439 "strip_size_kb": 64, 00:10:00.439 "state": "configuring", 00:10:00.439 "raid_level": "concat", 00:10:00.439 "superblock": true, 00:10:00.439 "num_base_bdevs": 3, 00:10:00.439 "num_base_bdevs_discovered": 2, 00:10:00.439 "num_base_bdevs_operational": 3, 00:10:00.439 "base_bdevs_list": [ 00:10:00.439 { 00:10:00.439 "name": "BaseBdev1", 00:10:00.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.439 "is_configured": false, 00:10:00.439 "data_offset": 0, 00:10:00.439 "data_size": 0 00:10:00.439 }, 00:10:00.439 { 00:10:00.439 "name": "BaseBdev2", 00:10:00.439 "uuid": "6b8a9b15-e9fb-44e4-ad6f-3c4df81ddd11", 00:10:00.439 "is_configured": true, 00:10:00.439 "data_offset": 2048, 00:10:00.439 "data_size": 63488 00:10:00.439 }, 00:10:00.439 { 00:10:00.439 "name": "BaseBdev3", 00:10:00.439 "uuid": "098c8a34-1906-4654-be83-99200e9b6470", 00:10:00.439 "is_configured": true, 00:10:00.439 "data_offset": 2048, 00:10:00.439 "data_size": 63488 00:10:00.439 } 00:10:00.439 ] 00:10:00.439 }' 00:10:00.439 10:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.439 10:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.008 10:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:01.008 10:33:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.008 10:33:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.008 [2024-11-20 10:33:04.233107] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:01.008 10:33:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.008 10:33:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:01.008 10:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.008 10:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.008 10:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:01.008 10:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.008 10:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.008 10:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.008 10:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.008 10:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.008 10:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.008 10:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.008 10:33:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.008 10:33:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.008 10:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.008 10:33:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.008 10:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.008 "name": "Existed_Raid", 00:10:01.008 "uuid": "2fe72165-71be-4f8f-bf56-a81d6217ad61", 00:10:01.008 "strip_size_kb": 64, 
00:10:01.008 "state": "configuring", 00:10:01.008 "raid_level": "concat", 00:10:01.008 "superblock": true, 00:10:01.008 "num_base_bdevs": 3, 00:10:01.008 "num_base_bdevs_discovered": 1, 00:10:01.008 "num_base_bdevs_operational": 3, 00:10:01.008 "base_bdevs_list": [ 00:10:01.008 { 00:10:01.008 "name": "BaseBdev1", 00:10:01.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.008 "is_configured": false, 00:10:01.008 "data_offset": 0, 00:10:01.008 "data_size": 0 00:10:01.008 }, 00:10:01.008 { 00:10:01.008 "name": null, 00:10:01.008 "uuid": "6b8a9b15-e9fb-44e4-ad6f-3c4df81ddd11", 00:10:01.008 "is_configured": false, 00:10:01.008 "data_offset": 0, 00:10:01.008 "data_size": 63488 00:10:01.008 }, 00:10:01.008 { 00:10:01.008 "name": "BaseBdev3", 00:10:01.008 "uuid": "098c8a34-1906-4654-be83-99200e9b6470", 00:10:01.008 "is_configured": true, 00:10:01.008 "data_offset": 2048, 00:10:01.008 "data_size": 63488 00:10:01.008 } 00:10:01.008 ] 00:10:01.008 }' 00:10:01.008 10:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.008 10:33:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.266 10:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.266 10:33:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.266 10:33:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.266 10:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:01.266 10:33:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.266 10:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:01.266 10:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:10:01.266 10:33:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.266 10:33:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.525 [2024-11-20 10:33:04.757196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:01.525 BaseBdev1 00:10:01.525 10:33:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.525 10:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:01.525 10:33:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:01.525 10:33:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:01.525 10:33:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:01.525 10:33:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:01.525 10:33:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:01.525 10:33:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:01.525 10:33:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.525 10:33:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.525 10:33:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.525 10:33:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:01.525 10:33:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.525 10:33:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.525 
[ 00:10:01.525 { 00:10:01.525 "name": "BaseBdev1", 00:10:01.525 "aliases": [ 00:10:01.525 "45361244-3f87-437e-a79c-4f498436f0f3" 00:10:01.525 ], 00:10:01.525 "product_name": "Malloc disk", 00:10:01.525 "block_size": 512, 00:10:01.525 "num_blocks": 65536, 00:10:01.525 "uuid": "45361244-3f87-437e-a79c-4f498436f0f3", 00:10:01.525 "assigned_rate_limits": { 00:10:01.525 "rw_ios_per_sec": 0, 00:10:01.525 "rw_mbytes_per_sec": 0, 00:10:01.525 "r_mbytes_per_sec": 0, 00:10:01.525 "w_mbytes_per_sec": 0 00:10:01.525 }, 00:10:01.525 "claimed": true, 00:10:01.525 "claim_type": "exclusive_write", 00:10:01.525 "zoned": false, 00:10:01.525 "supported_io_types": { 00:10:01.525 "read": true, 00:10:01.525 "write": true, 00:10:01.525 "unmap": true, 00:10:01.525 "flush": true, 00:10:01.525 "reset": true, 00:10:01.525 "nvme_admin": false, 00:10:01.525 "nvme_io": false, 00:10:01.525 "nvme_io_md": false, 00:10:01.525 "write_zeroes": true, 00:10:01.525 "zcopy": true, 00:10:01.525 "get_zone_info": false, 00:10:01.525 "zone_management": false, 00:10:01.525 "zone_append": false, 00:10:01.525 "compare": false, 00:10:01.525 "compare_and_write": false, 00:10:01.525 "abort": true, 00:10:01.525 "seek_hole": false, 00:10:01.525 "seek_data": false, 00:10:01.525 "copy": true, 00:10:01.525 "nvme_iov_md": false 00:10:01.525 }, 00:10:01.525 "memory_domains": [ 00:10:01.525 { 00:10:01.525 "dma_device_id": "system", 00:10:01.525 "dma_device_type": 1 00:10:01.525 }, 00:10:01.525 { 00:10:01.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.525 "dma_device_type": 2 00:10:01.525 } 00:10:01.525 ], 00:10:01.525 "driver_specific": {} 00:10:01.525 } 00:10:01.525 ] 00:10:01.525 10:33:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.525 10:33:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:01.525 10:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 3 00:10:01.525 10:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.525 10:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.525 10:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:01.525 10:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.525 10:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.525 10:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.525 10:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.525 10:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.525 10:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.525 10:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.525 10:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.525 10:33:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.525 10:33:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.525 10:33:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.525 10:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.525 "name": "Existed_Raid", 00:10:01.525 "uuid": "2fe72165-71be-4f8f-bf56-a81d6217ad61", 00:10:01.525 "strip_size_kb": 64, 00:10:01.525 "state": "configuring", 00:10:01.525 "raid_level": "concat", 00:10:01.525 "superblock": true, 
00:10:01.525 "num_base_bdevs": 3, 00:10:01.525 "num_base_bdevs_discovered": 2, 00:10:01.525 "num_base_bdevs_operational": 3, 00:10:01.525 "base_bdevs_list": [ 00:10:01.525 { 00:10:01.525 "name": "BaseBdev1", 00:10:01.525 "uuid": "45361244-3f87-437e-a79c-4f498436f0f3", 00:10:01.525 "is_configured": true, 00:10:01.525 "data_offset": 2048, 00:10:01.525 "data_size": 63488 00:10:01.525 }, 00:10:01.525 { 00:10:01.525 "name": null, 00:10:01.525 "uuid": "6b8a9b15-e9fb-44e4-ad6f-3c4df81ddd11", 00:10:01.525 "is_configured": false, 00:10:01.525 "data_offset": 0, 00:10:01.525 "data_size": 63488 00:10:01.525 }, 00:10:01.525 { 00:10:01.525 "name": "BaseBdev3", 00:10:01.525 "uuid": "098c8a34-1906-4654-be83-99200e9b6470", 00:10:01.525 "is_configured": true, 00:10:01.525 "data_offset": 2048, 00:10:01.525 "data_size": 63488 00:10:01.525 } 00:10:01.525 ] 00:10:01.525 }' 00:10:01.525 10:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.525 10:33:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.783 10:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:01.783 10:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.783 10:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.783 10:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.041 10:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.041 10:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:02.041 10:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:02.041 10:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:10:02.041 10:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.041 [2024-11-20 10:33:05.288430] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:02.041 10:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.041 10:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:02.041 10:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.042 10:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.042 10:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:02.042 10:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.042 10:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.042 10:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.042 10:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.042 10:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.042 10:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.042 10:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.042 10:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.042 10:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.042 10:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:10:02.042 10:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.042 10:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.042 "name": "Existed_Raid", 00:10:02.042 "uuid": "2fe72165-71be-4f8f-bf56-a81d6217ad61", 00:10:02.042 "strip_size_kb": 64, 00:10:02.042 "state": "configuring", 00:10:02.042 "raid_level": "concat", 00:10:02.042 "superblock": true, 00:10:02.042 "num_base_bdevs": 3, 00:10:02.042 "num_base_bdevs_discovered": 1, 00:10:02.042 "num_base_bdevs_operational": 3, 00:10:02.042 "base_bdevs_list": [ 00:10:02.042 { 00:10:02.042 "name": "BaseBdev1", 00:10:02.042 "uuid": "45361244-3f87-437e-a79c-4f498436f0f3", 00:10:02.042 "is_configured": true, 00:10:02.042 "data_offset": 2048, 00:10:02.042 "data_size": 63488 00:10:02.042 }, 00:10:02.042 { 00:10:02.042 "name": null, 00:10:02.042 "uuid": "6b8a9b15-e9fb-44e4-ad6f-3c4df81ddd11", 00:10:02.042 "is_configured": false, 00:10:02.042 "data_offset": 0, 00:10:02.042 "data_size": 63488 00:10:02.042 }, 00:10:02.042 { 00:10:02.042 "name": null, 00:10:02.042 "uuid": "098c8a34-1906-4654-be83-99200e9b6470", 00:10:02.042 "is_configured": false, 00:10:02.042 "data_offset": 0, 00:10:02.042 "data_size": 63488 00:10:02.042 } 00:10:02.042 ] 00:10:02.042 }' 00:10:02.042 10:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.042 10:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.300 10:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:02.300 10:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.300 10:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.300 10:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:10:02.300 10:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.300 10:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:02.300 10:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:02.300 10:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.300 10:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.300 [2024-11-20 10:33:05.775658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:02.559 10:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.559 10:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:02.559 10:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.559 10:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.559 10:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:02.559 10:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.559 10:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.559 10:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.559 10:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.559 10:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.559 10:33:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.559 10:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.559 10:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.559 10:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.559 10:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.559 10:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.559 10:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.559 "name": "Existed_Raid", 00:10:02.559 "uuid": "2fe72165-71be-4f8f-bf56-a81d6217ad61", 00:10:02.559 "strip_size_kb": 64, 00:10:02.559 "state": "configuring", 00:10:02.559 "raid_level": "concat", 00:10:02.559 "superblock": true, 00:10:02.559 "num_base_bdevs": 3, 00:10:02.559 "num_base_bdevs_discovered": 2, 00:10:02.559 "num_base_bdevs_operational": 3, 00:10:02.559 "base_bdevs_list": [ 00:10:02.559 { 00:10:02.559 "name": "BaseBdev1", 00:10:02.559 "uuid": "45361244-3f87-437e-a79c-4f498436f0f3", 00:10:02.559 "is_configured": true, 00:10:02.559 "data_offset": 2048, 00:10:02.559 "data_size": 63488 00:10:02.559 }, 00:10:02.559 { 00:10:02.559 "name": null, 00:10:02.559 "uuid": "6b8a9b15-e9fb-44e4-ad6f-3c4df81ddd11", 00:10:02.559 "is_configured": false, 00:10:02.559 "data_offset": 0, 00:10:02.559 "data_size": 63488 00:10:02.559 }, 00:10:02.559 { 00:10:02.559 "name": "BaseBdev3", 00:10:02.559 "uuid": "098c8a34-1906-4654-be83-99200e9b6470", 00:10:02.559 "is_configured": true, 00:10:02.559 "data_offset": 2048, 00:10:02.559 "data_size": 63488 00:10:02.559 } 00:10:02.559 ] 00:10:02.559 }' 00:10:02.559 10:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.559 10:33:05 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:10:02.817 10:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.817 10:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.817 10:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:02.817 10:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.817 10:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.817 10:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:02.817 10:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:02.817 10:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.817 10:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.817 [2024-11-20 10:33:06.270883] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:03.075 10:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.075 10:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:03.075 10:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.075 10:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.075 10:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:03.075 10:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.075 10:33:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.075 10:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.075 10:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.075 10:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.075 10:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.075 10:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.075 10:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.075 10:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.075 10:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.075 10:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.075 10:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.075 "name": "Existed_Raid", 00:10:03.075 "uuid": "2fe72165-71be-4f8f-bf56-a81d6217ad61", 00:10:03.075 "strip_size_kb": 64, 00:10:03.075 "state": "configuring", 00:10:03.075 "raid_level": "concat", 00:10:03.075 "superblock": true, 00:10:03.075 "num_base_bdevs": 3, 00:10:03.075 "num_base_bdevs_discovered": 1, 00:10:03.075 "num_base_bdevs_operational": 3, 00:10:03.075 "base_bdevs_list": [ 00:10:03.075 { 00:10:03.075 "name": null, 00:10:03.075 "uuid": "45361244-3f87-437e-a79c-4f498436f0f3", 00:10:03.075 "is_configured": false, 00:10:03.075 "data_offset": 0, 00:10:03.075 "data_size": 63488 00:10:03.075 }, 00:10:03.075 { 00:10:03.075 "name": null, 00:10:03.075 "uuid": "6b8a9b15-e9fb-44e4-ad6f-3c4df81ddd11", 00:10:03.075 "is_configured": false, 00:10:03.075 "data_offset": 0, 
00:10:03.075 "data_size": 63488 00:10:03.075 }, 00:10:03.075 { 00:10:03.075 "name": "BaseBdev3", 00:10:03.075 "uuid": "098c8a34-1906-4654-be83-99200e9b6470", 00:10:03.075 "is_configured": true, 00:10:03.075 "data_offset": 2048, 00:10:03.075 "data_size": 63488 00:10:03.075 } 00:10:03.075 ] 00:10:03.075 }' 00:10:03.075 10:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.075 10:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.640 10:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.640 10:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:03.640 10:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.640 10:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.640 10:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.640 10:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:03.640 10:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:03.640 10:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.640 10:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.640 [2024-11-20 10:33:06.872977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:03.640 10:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.640 10:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:03.640 10:33:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.640 10:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.640 10:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:03.640 10:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.640 10:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.640 10:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.640 10:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.640 10:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.640 10:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.640 10:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.640 10:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.640 10:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.640 10:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.640 10:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.640 10:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.640 "name": "Existed_Raid", 00:10:03.640 "uuid": "2fe72165-71be-4f8f-bf56-a81d6217ad61", 00:10:03.640 "strip_size_kb": 64, 00:10:03.640 "state": "configuring", 00:10:03.640 "raid_level": "concat", 00:10:03.640 "superblock": true, 00:10:03.640 "num_base_bdevs": 3, 00:10:03.640 
"num_base_bdevs_discovered": 2, 00:10:03.640 "num_base_bdevs_operational": 3, 00:10:03.640 "base_bdevs_list": [ 00:10:03.640 { 00:10:03.640 "name": null, 00:10:03.640 "uuid": "45361244-3f87-437e-a79c-4f498436f0f3", 00:10:03.640 "is_configured": false, 00:10:03.640 "data_offset": 0, 00:10:03.640 "data_size": 63488 00:10:03.640 }, 00:10:03.640 { 00:10:03.640 "name": "BaseBdev2", 00:10:03.640 "uuid": "6b8a9b15-e9fb-44e4-ad6f-3c4df81ddd11", 00:10:03.640 "is_configured": true, 00:10:03.640 "data_offset": 2048, 00:10:03.640 "data_size": 63488 00:10:03.640 }, 00:10:03.640 { 00:10:03.640 "name": "BaseBdev3", 00:10:03.640 "uuid": "098c8a34-1906-4654-be83-99200e9b6470", 00:10:03.640 "is_configured": true, 00:10:03.640 "data_offset": 2048, 00:10:03.640 "data_size": 63488 00:10:03.640 } 00:10:03.640 ] 00:10:03.640 }' 00:10:03.640 10:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.640 10:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.898 10:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.898 10:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:03.898 10:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.898 10:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.898 10:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.156 10:33:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 45361244-3f87-437e-a79c-4f498436f0f3 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.156 [2024-11-20 10:33:07.478985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:04.156 [2024-11-20 10:33:07.479333] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:04.156 [2024-11-20 10:33:07.479413] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:04.156 [2024-11-20 10:33:07.479708] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:04.156 [2024-11-20 10:33:07.479963] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:04.156 [2024-11-20 10:33:07.480021] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:04.156 NewBaseBdev 00:10:04.156 [2024-11-20 10:33:07.480268] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 
00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.156 [ 00:10:04.156 { 00:10:04.156 "name": "NewBaseBdev", 00:10:04.156 "aliases": [ 00:10:04.156 "45361244-3f87-437e-a79c-4f498436f0f3" 00:10:04.156 ], 00:10:04.156 "product_name": "Malloc disk", 00:10:04.156 "block_size": 512, 00:10:04.156 "num_blocks": 65536, 00:10:04.156 "uuid": "45361244-3f87-437e-a79c-4f498436f0f3", 00:10:04.156 "assigned_rate_limits": { 00:10:04.156 "rw_ios_per_sec": 0, 00:10:04.156 "rw_mbytes_per_sec": 0, 00:10:04.156 "r_mbytes_per_sec": 0, 00:10:04.156 "w_mbytes_per_sec": 0 00:10:04.156 }, 00:10:04.156 "claimed": true, 00:10:04.156 "claim_type": "exclusive_write", 00:10:04.156 "zoned": false, 00:10:04.156 "supported_io_types": { 00:10:04.156 "read": true, 00:10:04.156 "write": true, 
00:10:04.156 "unmap": true, 00:10:04.156 "flush": true, 00:10:04.156 "reset": true, 00:10:04.156 "nvme_admin": false, 00:10:04.156 "nvme_io": false, 00:10:04.156 "nvme_io_md": false, 00:10:04.156 "write_zeroes": true, 00:10:04.156 "zcopy": true, 00:10:04.156 "get_zone_info": false, 00:10:04.156 "zone_management": false, 00:10:04.156 "zone_append": false, 00:10:04.156 "compare": false, 00:10:04.156 "compare_and_write": false, 00:10:04.156 "abort": true, 00:10:04.156 "seek_hole": false, 00:10:04.156 "seek_data": false, 00:10:04.156 "copy": true, 00:10:04.156 "nvme_iov_md": false 00:10:04.156 }, 00:10:04.156 "memory_domains": [ 00:10:04.156 { 00:10:04.156 "dma_device_id": "system", 00:10:04.156 "dma_device_type": 1 00:10:04.156 }, 00:10:04.156 { 00:10:04.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.156 "dma_device_type": 2 00:10:04.156 } 00:10:04.156 ], 00:10:04.156 "driver_specific": {} 00:10:04.156 } 00:10:04.156 ] 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.156 "name": "Existed_Raid", 00:10:04.156 "uuid": "2fe72165-71be-4f8f-bf56-a81d6217ad61", 00:10:04.156 "strip_size_kb": 64, 00:10:04.156 "state": "online", 00:10:04.156 "raid_level": "concat", 00:10:04.156 "superblock": true, 00:10:04.156 "num_base_bdevs": 3, 00:10:04.156 "num_base_bdevs_discovered": 3, 00:10:04.156 "num_base_bdevs_operational": 3, 00:10:04.156 "base_bdevs_list": [ 00:10:04.156 { 00:10:04.156 "name": "NewBaseBdev", 00:10:04.156 "uuid": "45361244-3f87-437e-a79c-4f498436f0f3", 00:10:04.156 "is_configured": true, 00:10:04.156 "data_offset": 2048, 00:10:04.156 "data_size": 63488 00:10:04.156 }, 00:10:04.156 { 00:10:04.156 "name": "BaseBdev2", 00:10:04.156 "uuid": "6b8a9b15-e9fb-44e4-ad6f-3c4df81ddd11", 00:10:04.156 "is_configured": true, 00:10:04.156 "data_offset": 2048, 00:10:04.156 "data_size": 63488 00:10:04.156 }, 00:10:04.156 { 00:10:04.156 "name": "BaseBdev3", 00:10:04.156 "uuid": 
"098c8a34-1906-4654-be83-99200e9b6470", 00:10:04.156 "is_configured": true, 00:10:04.156 "data_offset": 2048, 00:10:04.156 "data_size": 63488 00:10:04.156 } 00:10:04.156 ] 00:10:04.156 }' 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.156 10:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.729 10:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:04.729 10:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:04.729 10:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:04.729 10:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:04.729 10:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:04.729 10:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:04.729 10:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:04.729 10:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:04.729 10:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.729 10:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.729 [2024-11-20 10:33:07.986515] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:04.729 10:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.729 10:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:04.729 "name": "Existed_Raid", 00:10:04.729 "aliases": [ 00:10:04.729 "2fe72165-71be-4f8f-bf56-a81d6217ad61" 
00:10:04.729 ], 00:10:04.729 "product_name": "Raid Volume", 00:10:04.729 "block_size": 512, 00:10:04.729 "num_blocks": 190464, 00:10:04.729 "uuid": "2fe72165-71be-4f8f-bf56-a81d6217ad61", 00:10:04.729 "assigned_rate_limits": { 00:10:04.729 "rw_ios_per_sec": 0, 00:10:04.729 "rw_mbytes_per_sec": 0, 00:10:04.729 "r_mbytes_per_sec": 0, 00:10:04.729 "w_mbytes_per_sec": 0 00:10:04.729 }, 00:10:04.729 "claimed": false, 00:10:04.729 "zoned": false, 00:10:04.729 "supported_io_types": { 00:10:04.729 "read": true, 00:10:04.729 "write": true, 00:10:04.729 "unmap": true, 00:10:04.729 "flush": true, 00:10:04.729 "reset": true, 00:10:04.729 "nvme_admin": false, 00:10:04.729 "nvme_io": false, 00:10:04.729 "nvme_io_md": false, 00:10:04.729 "write_zeroes": true, 00:10:04.729 "zcopy": false, 00:10:04.729 "get_zone_info": false, 00:10:04.729 "zone_management": false, 00:10:04.729 "zone_append": false, 00:10:04.729 "compare": false, 00:10:04.729 "compare_and_write": false, 00:10:04.729 "abort": false, 00:10:04.729 "seek_hole": false, 00:10:04.729 "seek_data": false, 00:10:04.729 "copy": false, 00:10:04.729 "nvme_iov_md": false 00:10:04.729 }, 00:10:04.729 "memory_domains": [ 00:10:04.729 { 00:10:04.729 "dma_device_id": "system", 00:10:04.729 "dma_device_type": 1 00:10:04.729 }, 00:10:04.729 { 00:10:04.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.729 "dma_device_type": 2 00:10:04.729 }, 00:10:04.729 { 00:10:04.729 "dma_device_id": "system", 00:10:04.729 "dma_device_type": 1 00:10:04.729 }, 00:10:04.729 { 00:10:04.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.729 "dma_device_type": 2 00:10:04.729 }, 00:10:04.729 { 00:10:04.729 "dma_device_id": "system", 00:10:04.729 "dma_device_type": 1 00:10:04.729 }, 00:10:04.729 { 00:10:04.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.729 "dma_device_type": 2 00:10:04.729 } 00:10:04.729 ], 00:10:04.729 "driver_specific": { 00:10:04.729 "raid": { 00:10:04.729 "uuid": "2fe72165-71be-4f8f-bf56-a81d6217ad61", 00:10:04.729 
"strip_size_kb": 64, 00:10:04.729 "state": "online", 00:10:04.729 "raid_level": "concat", 00:10:04.729 "superblock": true, 00:10:04.729 "num_base_bdevs": 3, 00:10:04.729 "num_base_bdevs_discovered": 3, 00:10:04.729 "num_base_bdevs_operational": 3, 00:10:04.729 "base_bdevs_list": [ 00:10:04.729 { 00:10:04.729 "name": "NewBaseBdev", 00:10:04.729 "uuid": "45361244-3f87-437e-a79c-4f498436f0f3", 00:10:04.729 "is_configured": true, 00:10:04.729 "data_offset": 2048, 00:10:04.729 "data_size": 63488 00:10:04.729 }, 00:10:04.729 { 00:10:04.729 "name": "BaseBdev2", 00:10:04.729 "uuid": "6b8a9b15-e9fb-44e4-ad6f-3c4df81ddd11", 00:10:04.729 "is_configured": true, 00:10:04.729 "data_offset": 2048, 00:10:04.729 "data_size": 63488 00:10:04.729 }, 00:10:04.729 { 00:10:04.729 "name": "BaseBdev3", 00:10:04.729 "uuid": "098c8a34-1906-4654-be83-99200e9b6470", 00:10:04.729 "is_configured": true, 00:10:04.729 "data_offset": 2048, 00:10:04.729 "data_size": 63488 00:10:04.729 } 00:10:04.729 ] 00:10:04.729 } 00:10:04.729 } 00:10:04.729 }' 00:10:04.729 10:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:04.729 10:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:04.729 BaseBdev2 00:10:04.729 BaseBdev3' 00:10:04.729 10:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.729 10:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:04.729 10:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.729 10:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:04.729 10:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.729 10:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.729 10:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.729 10:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.729 10:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.729 10:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.729 10:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.729 10:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:04.729 10:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.729 10:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.729 10:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.729 10:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.729 10:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.729 10:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.729 10:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.729 10:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:04.729 10:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:10:04.729 10:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.729 10:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.986 10:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.986 10:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.986 10:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.986 10:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:04.986 10:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.986 10:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.986 [2024-11-20 10:33:08.237798] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:04.986 [2024-11-20 10:33:08.237838] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:04.986 [2024-11-20 10:33:08.237939] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:04.986 [2024-11-20 10:33:08.238001] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:04.986 [2024-11-20 10:33:08.238015] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:04.986 10:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.986 10:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66408 00:10:04.986 10:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66408 ']' 00:10:04.986 10:33:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 66408 00:10:04.986 10:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:04.986 10:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:04.986 10:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66408 00:10:04.986 10:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:04.986 10:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:04.986 10:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66408' 00:10:04.986 killing process with pid 66408 00:10:04.986 10:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66408 00:10:04.986 [2024-11-20 10:33:08.287576] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:04.986 10:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66408 00:10:05.242 [2024-11-20 10:33:08.597390] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:06.622 10:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:06.622 00:10:06.622 real 0m10.900s 00:10:06.622 user 0m17.224s 00:10:06.622 sys 0m1.948s 00:10:06.622 10:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.622 10:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.622 ************************************ 00:10:06.622 END TEST raid_state_function_test_sb 00:10:06.622 ************************************ 00:10:06.622 10:33:09 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:10:06.622 10:33:09 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:06.622 10:33:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.622 10:33:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:06.622 ************************************ 00:10:06.622 START TEST raid_superblock_test 00:10:06.622 ************************************ 00:10:06.622 10:33:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:10:06.622 10:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:06.622 10:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:06.622 10:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:06.622 10:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:06.622 10:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:06.622 10:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:06.622 10:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:06.622 10:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:06.622 10:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:06.622 10:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:06.622 10:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:06.622 10:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:06.623 10:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:06.623 10:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:06.623 10:33:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:06.623 10:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:06.623 10:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67036 00:10:06.623 10:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:06.623 10:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67036 00:10:06.623 10:33:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 67036 ']' 00:10:06.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.623 10:33:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.623 10:33:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:06.623 10:33:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.623 10:33:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:06.623 10:33:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.623 [2024-11-20 10:33:09.900663] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:10:06.623 [2024-11-20 10:33:09.900807] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67036 ] 00:10:06.623 [2024-11-20 10:33:10.056347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.881 [2024-11-20 10:33:10.175814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.140 [2024-11-20 10:33:10.371219] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:07.140 [2024-11-20 10:33:10.371393] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:07.399 10:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:07.399 10:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:07.399 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:07.399 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:07.399 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:07.399 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:07.399 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:07.399 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:07.399 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:07.399 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:07.399 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:07.399 
10:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.399 10:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.399 malloc1 00:10:07.399 10:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.399 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:07.399 10:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.399 10:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.399 [2024-11-20 10:33:10.799427] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:07.399 [2024-11-20 10:33:10.799498] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:07.399 [2024-11-20 10:33:10.799526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:07.399 [2024-11-20 10:33:10.799537] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:07.399 [2024-11-20 10:33:10.801700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:07.399 [2024-11-20 10:33:10.801745] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:07.399 pt1 00:10:07.399 10:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.399 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:07.399 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:07.399 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:07.399 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:07.399 10:33:10 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:07.399 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:07.399 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:07.399 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:07.399 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:07.399 10:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.399 10:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.399 malloc2 00:10:07.399 10:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.399 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:07.399 10:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.399 10:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.399 [2024-11-20 10:33:10.854227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:07.399 [2024-11-20 10:33:10.854334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:07.399 [2024-11-20 10:33:10.854414] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:07.399 [2024-11-20 10:33:10.854450] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:07.399 [2024-11-20 10:33:10.856481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:07.399 [2024-11-20 10:33:10.856561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:07.399 
pt2 00:10:07.399 10:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.399 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:07.400 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:07.400 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:07.400 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:07.400 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:07.400 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:07.400 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:07.400 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:07.400 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:07.400 10:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.400 10:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.659 malloc3 00:10:07.659 10:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.659 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:07.659 10:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.659 10:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.659 [2024-11-20 10:33:10.934954] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:07.659 [2024-11-20 10:33:10.935089] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:07.659 [2024-11-20 10:33:10.935153] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:07.659 [2024-11-20 10:33:10.935197] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:07.659 [2024-11-20 10:33:10.937643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:07.659 [2024-11-20 10:33:10.937730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:07.659 pt3 00:10:07.659 10:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.659 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:07.659 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:07.659 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:07.659 10:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.659 10:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.659 [2024-11-20 10:33:10.950970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:07.659 [2024-11-20 10:33:10.952955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:07.659 [2024-11-20 10:33:10.953079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:07.659 [2024-11-20 10:33:10.953280] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:07.659 [2024-11-20 10:33:10.953336] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:07.659 [2024-11-20 10:33:10.953649] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:10:07.659 [2024-11-20 10:33:10.953889] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:07.659 [2024-11-20 10:33:10.953907] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:07.659 [2024-11-20 10:33:10.954091] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:07.659 10:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.659 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:07.659 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:07.659 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:07.659 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:07.659 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.659 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.659 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.659 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.659 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.659 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.659 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.659 10:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:07.659 10:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.659 10:33:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.659 10:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.659 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.659 "name": "raid_bdev1", 00:10:07.659 "uuid": "02e93c53-561e-4e4e-8597-3ca304100614", 00:10:07.659 "strip_size_kb": 64, 00:10:07.659 "state": "online", 00:10:07.659 "raid_level": "concat", 00:10:07.659 "superblock": true, 00:10:07.659 "num_base_bdevs": 3, 00:10:07.659 "num_base_bdevs_discovered": 3, 00:10:07.659 "num_base_bdevs_operational": 3, 00:10:07.659 "base_bdevs_list": [ 00:10:07.659 { 00:10:07.659 "name": "pt1", 00:10:07.659 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:07.659 "is_configured": true, 00:10:07.659 "data_offset": 2048, 00:10:07.659 "data_size": 63488 00:10:07.659 }, 00:10:07.659 { 00:10:07.659 "name": "pt2", 00:10:07.659 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:07.659 "is_configured": true, 00:10:07.659 "data_offset": 2048, 00:10:07.659 "data_size": 63488 00:10:07.659 }, 00:10:07.659 { 00:10:07.659 "name": "pt3", 00:10:07.659 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:07.659 "is_configured": true, 00:10:07.659 "data_offset": 2048, 00:10:07.659 "data_size": 63488 00:10:07.659 } 00:10:07.659 ] 00:10:07.659 }' 00:10:07.659 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.659 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.227 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:08.227 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:08.227 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:08.227 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:10:08.227 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:08.227 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:08.227 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:08.227 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:08.227 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.227 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.227 [2024-11-20 10:33:11.458479] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:08.227 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.227 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:08.227 "name": "raid_bdev1", 00:10:08.227 "aliases": [ 00:10:08.227 "02e93c53-561e-4e4e-8597-3ca304100614" 00:10:08.227 ], 00:10:08.227 "product_name": "Raid Volume", 00:10:08.227 "block_size": 512, 00:10:08.227 "num_blocks": 190464, 00:10:08.227 "uuid": "02e93c53-561e-4e4e-8597-3ca304100614", 00:10:08.227 "assigned_rate_limits": { 00:10:08.227 "rw_ios_per_sec": 0, 00:10:08.227 "rw_mbytes_per_sec": 0, 00:10:08.227 "r_mbytes_per_sec": 0, 00:10:08.227 "w_mbytes_per_sec": 0 00:10:08.227 }, 00:10:08.227 "claimed": false, 00:10:08.227 "zoned": false, 00:10:08.227 "supported_io_types": { 00:10:08.227 "read": true, 00:10:08.227 "write": true, 00:10:08.227 "unmap": true, 00:10:08.227 "flush": true, 00:10:08.227 "reset": true, 00:10:08.227 "nvme_admin": false, 00:10:08.227 "nvme_io": false, 00:10:08.227 "nvme_io_md": false, 00:10:08.227 "write_zeroes": true, 00:10:08.227 "zcopy": false, 00:10:08.227 "get_zone_info": false, 00:10:08.227 "zone_management": false, 00:10:08.227 "zone_append": false, 00:10:08.227 "compare": 
false, 00:10:08.227 "compare_and_write": false, 00:10:08.227 "abort": false, 00:10:08.227 "seek_hole": false, 00:10:08.227 "seek_data": false, 00:10:08.227 "copy": false, 00:10:08.227 "nvme_iov_md": false 00:10:08.227 }, 00:10:08.227 "memory_domains": [ 00:10:08.227 { 00:10:08.227 "dma_device_id": "system", 00:10:08.227 "dma_device_type": 1 00:10:08.227 }, 00:10:08.227 { 00:10:08.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.227 "dma_device_type": 2 00:10:08.227 }, 00:10:08.227 { 00:10:08.227 "dma_device_id": "system", 00:10:08.227 "dma_device_type": 1 00:10:08.227 }, 00:10:08.227 { 00:10:08.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.227 "dma_device_type": 2 00:10:08.227 }, 00:10:08.227 { 00:10:08.227 "dma_device_id": "system", 00:10:08.227 "dma_device_type": 1 00:10:08.227 }, 00:10:08.227 { 00:10:08.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.227 "dma_device_type": 2 00:10:08.227 } 00:10:08.227 ], 00:10:08.227 "driver_specific": { 00:10:08.227 "raid": { 00:10:08.227 "uuid": "02e93c53-561e-4e4e-8597-3ca304100614", 00:10:08.227 "strip_size_kb": 64, 00:10:08.227 "state": "online", 00:10:08.227 "raid_level": "concat", 00:10:08.227 "superblock": true, 00:10:08.227 "num_base_bdevs": 3, 00:10:08.227 "num_base_bdevs_discovered": 3, 00:10:08.227 "num_base_bdevs_operational": 3, 00:10:08.227 "base_bdevs_list": [ 00:10:08.227 { 00:10:08.227 "name": "pt1", 00:10:08.227 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:08.227 "is_configured": true, 00:10:08.227 "data_offset": 2048, 00:10:08.227 "data_size": 63488 00:10:08.227 }, 00:10:08.227 { 00:10:08.227 "name": "pt2", 00:10:08.227 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:08.227 "is_configured": true, 00:10:08.227 "data_offset": 2048, 00:10:08.227 "data_size": 63488 00:10:08.227 }, 00:10:08.227 { 00:10:08.227 "name": "pt3", 00:10:08.227 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:08.227 "is_configured": true, 00:10:08.227 "data_offset": 2048, 00:10:08.227 
"data_size": 63488 00:10:08.227 } 00:10:08.227 ] 00:10:08.227 } 00:10:08.227 } 00:10:08.227 }' 00:10:08.227 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:08.227 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:08.227 pt2 00:10:08.227 pt3' 00:10:08.227 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.227 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:08.227 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.227 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.227 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:08.227 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.227 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.227 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.227 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.227 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.227 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.227 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:08.227 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.227 10:33:11 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.227 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.227 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.227 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.227 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.227 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.228 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.228 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:08.228 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.228 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.228 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.487 [2024-11-20 10:33:11.725959] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:08.487 10:33:11 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=02e93c53-561e-4e4e-8597-3ca304100614 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 02e93c53-561e-4e4e-8597-3ca304100614 ']' 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.487 [2024-11-20 10:33:11.769607] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:08.487 [2024-11-20 10:33:11.769698] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:08.487 [2024-11-20 10:33:11.769841] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:08.487 [2024-11-20 10:33:11.769948] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:08.487 [2024-11-20 10:33:11.770008] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.487 10:33:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.487 [2024-11-20 10:33:11.921472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:08.487 [2024-11-20 10:33:11.923393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
malloc2 is claimed 00:10:08.487 [2024-11-20 10:33:11.923464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:08.487 [2024-11-20 10:33:11.923524] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:08.487 [2024-11-20 10:33:11.923586] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:08.487 [2024-11-20 10:33:11.923609] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:08.487 [2024-11-20 10:33:11.923629] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:08.487 [2024-11-20 10:33:11.923641] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:08.487 request: 00:10:08.487 { 00:10:08.487 "name": "raid_bdev1", 00:10:08.487 "raid_level": "concat", 00:10:08.487 "base_bdevs": [ 00:10:08.487 "malloc1", 00:10:08.487 "malloc2", 00:10:08.487 "malloc3" 00:10:08.487 ], 00:10:08.487 "strip_size_kb": 64, 00:10:08.487 "superblock": false, 00:10:08.487 "method": "bdev_raid_create", 00:10:08.487 "req_id": 1 00:10:08.487 } 00:10:08.487 Got JSON-RPC error response 00:10:08.487 response: 00:10:08.487 { 00:10:08.487 "code": -17, 00:10:08.487 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:08.487 } 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es 
== 0 )) 00:10:08.487 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.488 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.488 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.488 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:08.488 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.748 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:08.748 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:08.748 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:08.748 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.748 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.748 [2024-11-20 10:33:11.985239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:08.748 [2024-11-20 10:33:11.985348] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.748 [2024-11-20 10:33:11.985402] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:08.748 [2024-11-20 10:33:11.985447] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.748 [2024-11-20 10:33:11.987673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.748 [2024-11-20 10:33:11.987767] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:08.748 [2024-11-20 10:33:11.987890] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:08.748 [2024-11-20 10:33:11.987985] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:08.748 pt1 00:10:08.748 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.748 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:10:08.748 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:08.748 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.748 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:08.748 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.748 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.748 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.748 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.748 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.748 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.748 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.748 10:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:08.748 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.748 10:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.748 10:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.748 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.748 "name": "raid_bdev1", 
00:10:08.748 "uuid": "02e93c53-561e-4e4e-8597-3ca304100614", 00:10:08.748 "strip_size_kb": 64, 00:10:08.748 "state": "configuring", 00:10:08.748 "raid_level": "concat", 00:10:08.748 "superblock": true, 00:10:08.748 "num_base_bdevs": 3, 00:10:08.748 "num_base_bdevs_discovered": 1, 00:10:08.748 "num_base_bdevs_operational": 3, 00:10:08.748 "base_bdevs_list": [ 00:10:08.748 { 00:10:08.748 "name": "pt1", 00:10:08.748 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:08.748 "is_configured": true, 00:10:08.748 "data_offset": 2048, 00:10:08.748 "data_size": 63488 00:10:08.748 }, 00:10:08.748 { 00:10:08.748 "name": null, 00:10:08.748 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:08.748 "is_configured": false, 00:10:08.748 "data_offset": 2048, 00:10:08.748 "data_size": 63488 00:10:08.748 }, 00:10:08.748 { 00:10:08.748 "name": null, 00:10:08.748 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:08.748 "is_configured": false, 00:10:08.748 "data_offset": 2048, 00:10:08.748 "data_size": 63488 00:10:08.748 } 00:10:08.748 ] 00:10:08.748 }' 00:10:08.748 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.748 10:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.008 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:09.008 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:09.008 10:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.008 10:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.008 [2024-11-20 10:33:12.384608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:09.008 [2024-11-20 10:33:12.384752] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:09.008 [2024-11-20 10:33:12.384784] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:09.008 [2024-11-20 10:33:12.384796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:09.008 [2024-11-20 10:33:12.385287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:09.008 [2024-11-20 10:33:12.385317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:09.008 [2024-11-20 10:33:12.385429] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:09.008 [2024-11-20 10:33:12.385457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:09.008 pt2 00:10:09.008 10:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.008 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:09.008 10:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.008 10:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.008 [2024-11-20 10:33:12.396587] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:09.008 10:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.008 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:10:09.008 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:09.008 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.008 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:09.008 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.008 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:10:09.008 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.008 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.008 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.008 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.008 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.008 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:09.008 10:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.008 10:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.008 10:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.008 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.008 "name": "raid_bdev1", 00:10:09.008 "uuid": "02e93c53-561e-4e4e-8597-3ca304100614", 00:10:09.008 "strip_size_kb": 64, 00:10:09.008 "state": "configuring", 00:10:09.008 "raid_level": "concat", 00:10:09.008 "superblock": true, 00:10:09.008 "num_base_bdevs": 3, 00:10:09.008 "num_base_bdevs_discovered": 1, 00:10:09.008 "num_base_bdevs_operational": 3, 00:10:09.008 "base_bdevs_list": [ 00:10:09.008 { 00:10:09.008 "name": "pt1", 00:10:09.008 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:09.008 "is_configured": true, 00:10:09.008 "data_offset": 2048, 00:10:09.008 "data_size": 63488 00:10:09.008 }, 00:10:09.008 { 00:10:09.008 "name": null, 00:10:09.008 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:09.008 "is_configured": false, 00:10:09.008 "data_offset": 0, 00:10:09.008 "data_size": 63488 00:10:09.008 }, 00:10:09.008 { 00:10:09.008 "name": null, 00:10:09.008 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:09.008 "is_configured": false, 00:10:09.008 "data_offset": 2048, 00:10:09.008 "data_size": 63488 00:10:09.008 } 00:10:09.008 ] 00:10:09.008 }' 00:10:09.008 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.008 10:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.577 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:09.577 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:09.577 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:09.577 10:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.577 10:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.577 [2024-11-20 10:33:12.887874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:09.577 [2024-11-20 10:33:12.888019] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:09.577 [2024-11-20 10:33:12.888066] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:09.577 [2024-11-20 10:33:12.888117] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:09.577 [2024-11-20 10:33:12.888702] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:09.577 [2024-11-20 10:33:12.888791] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:09.577 [2024-11-20 10:33:12.888936] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:09.577 [2024-11-20 10:33:12.889011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:09.577 pt2 00:10:09.577 10:33:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.577 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:09.577 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:09.577 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:09.577 10:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.577 10:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.577 [2024-11-20 10:33:12.895828] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:09.577 [2024-11-20 10:33:12.895934] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:09.577 [2024-11-20 10:33:12.895984] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:09.577 [2024-11-20 10:33:12.896025] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:09.577 [2024-11-20 10:33:12.896507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:09.577 [2024-11-20 10:33:12.896591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:09.577 [2024-11-20 10:33:12.896700] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:09.577 [2024-11-20 10:33:12.896764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:09.577 [2024-11-20 10:33:12.896930] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:09.577 [2024-11-20 10:33:12.896991] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:09.577 [2024-11-20 10:33:12.897266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:10:09.577 [2024-11-20 10:33:12.897488] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:09.577 [2024-11-20 10:33:12.897534] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:09.577 [2024-11-20 10:33:12.897761] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:09.577 pt3 00:10:09.577 10:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.577 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:09.577 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:09.577 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:09.577 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:09.577 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:09.577 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:09.577 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.577 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.577 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.577 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.577 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.577 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.577 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.577 10:33:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:09.577 10:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.577 10:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.577 10:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.577 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.578 "name": "raid_bdev1", 00:10:09.578 "uuid": "02e93c53-561e-4e4e-8597-3ca304100614", 00:10:09.578 "strip_size_kb": 64, 00:10:09.578 "state": "online", 00:10:09.578 "raid_level": "concat", 00:10:09.578 "superblock": true, 00:10:09.578 "num_base_bdevs": 3, 00:10:09.578 "num_base_bdevs_discovered": 3, 00:10:09.578 "num_base_bdevs_operational": 3, 00:10:09.578 "base_bdevs_list": [ 00:10:09.578 { 00:10:09.578 "name": "pt1", 00:10:09.578 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:09.578 "is_configured": true, 00:10:09.578 "data_offset": 2048, 00:10:09.578 "data_size": 63488 00:10:09.578 }, 00:10:09.578 { 00:10:09.578 "name": "pt2", 00:10:09.578 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:09.578 "is_configured": true, 00:10:09.578 "data_offset": 2048, 00:10:09.578 "data_size": 63488 00:10:09.578 }, 00:10:09.578 { 00:10:09.578 "name": "pt3", 00:10:09.578 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:09.578 "is_configured": true, 00:10:09.578 "data_offset": 2048, 00:10:09.578 "data_size": 63488 00:10:09.578 } 00:10:09.578 ] 00:10:09.578 }' 00:10:09.578 10:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.578 10:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.837 10:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:09.837 10:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:10:09.837 10:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:09.837 10:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:09.837 10:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:09.837 10:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:09.837 10:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:09.837 10:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:09.837 10:33:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.837 10:33:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.837 [2024-11-20 10:33:13.299544] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:10.096 10:33:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.096 10:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:10.096 "name": "raid_bdev1", 00:10:10.096 "aliases": [ 00:10:10.096 "02e93c53-561e-4e4e-8597-3ca304100614" 00:10:10.096 ], 00:10:10.096 "product_name": "Raid Volume", 00:10:10.096 "block_size": 512, 00:10:10.096 "num_blocks": 190464, 00:10:10.096 "uuid": "02e93c53-561e-4e4e-8597-3ca304100614", 00:10:10.096 "assigned_rate_limits": { 00:10:10.096 "rw_ios_per_sec": 0, 00:10:10.096 "rw_mbytes_per_sec": 0, 00:10:10.096 "r_mbytes_per_sec": 0, 00:10:10.096 "w_mbytes_per_sec": 0 00:10:10.097 }, 00:10:10.097 "claimed": false, 00:10:10.097 "zoned": false, 00:10:10.097 "supported_io_types": { 00:10:10.097 "read": true, 00:10:10.097 "write": true, 00:10:10.097 "unmap": true, 00:10:10.097 "flush": true, 00:10:10.097 "reset": true, 00:10:10.097 "nvme_admin": false, 00:10:10.097 "nvme_io": false, 
00:10:10.097 "nvme_io_md": false, 00:10:10.097 "write_zeroes": true, 00:10:10.097 "zcopy": false, 00:10:10.097 "get_zone_info": false, 00:10:10.097 "zone_management": false, 00:10:10.097 "zone_append": false, 00:10:10.097 "compare": false, 00:10:10.097 "compare_and_write": false, 00:10:10.097 "abort": false, 00:10:10.097 "seek_hole": false, 00:10:10.097 "seek_data": false, 00:10:10.097 "copy": false, 00:10:10.097 "nvme_iov_md": false 00:10:10.097 }, 00:10:10.097 "memory_domains": [ 00:10:10.097 { 00:10:10.097 "dma_device_id": "system", 00:10:10.097 "dma_device_type": 1 00:10:10.097 }, 00:10:10.097 { 00:10:10.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.097 "dma_device_type": 2 00:10:10.097 }, 00:10:10.097 { 00:10:10.097 "dma_device_id": "system", 00:10:10.097 "dma_device_type": 1 00:10:10.097 }, 00:10:10.097 { 00:10:10.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.097 "dma_device_type": 2 00:10:10.097 }, 00:10:10.097 { 00:10:10.097 "dma_device_id": "system", 00:10:10.097 "dma_device_type": 1 00:10:10.097 }, 00:10:10.097 { 00:10:10.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.097 "dma_device_type": 2 00:10:10.097 } 00:10:10.097 ], 00:10:10.097 "driver_specific": { 00:10:10.097 "raid": { 00:10:10.097 "uuid": "02e93c53-561e-4e4e-8597-3ca304100614", 00:10:10.097 "strip_size_kb": 64, 00:10:10.097 "state": "online", 00:10:10.097 "raid_level": "concat", 00:10:10.097 "superblock": true, 00:10:10.097 "num_base_bdevs": 3, 00:10:10.097 "num_base_bdevs_discovered": 3, 00:10:10.097 "num_base_bdevs_operational": 3, 00:10:10.097 "base_bdevs_list": [ 00:10:10.097 { 00:10:10.097 "name": "pt1", 00:10:10.097 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:10.097 "is_configured": true, 00:10:10.097 "data_offset": 2048, 00:10:10.097 "data_size": 63488 00:10:10.097 }, 00:10:10.097 { 00:10:10.097 "name": "pt2", 00:10:10.097 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:10.097 "is_configured": true, 00:10:10.097 "data_offset": 2048, 00:10:10.097 
"data_size": 63488 00:10:10.097 }, 00:10:10.097 { 00:10:10.097 "name": "pt3", 00:10:10.097 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:10.097 "is_configured": true, 00:10:10.097 "data_offset": 2048, 00:10:10.097 "data_size": 63488 00:10:10.097 } 00:10:10.097 ] 00:10:10.097 } 00:10:10.097 } 00:10:10.097 }' 00:10:10.097 10:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:10.097 10:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:10.097 pt2 00:10:10.097 pt3' 00:10:10.097 10:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.097 10:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:10.098 10:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.098 10:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:10.098 10:33:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.098 10:33:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.098 10:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.098 10:33:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.098 10:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.098 10:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.098 10:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.098 10:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:10.098 10:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.098 10:33:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.098 10:33:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.098 10:33:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.098 10:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.098 10:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.098 10:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.098 10:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:10.098 10:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.098 10:33:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.098 10:33:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.098 10:33:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.359 10:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.359 10:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.359 10:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:10.359 10:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:10.359 10:33:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.359 10:33:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:10.359 [2024-11-20 10:33:13.590983] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:10.359 10:33:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.359 10:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 02e93c53-561e-4e4e-8597-3ca304100614 '!=' 02e93c53-561e-4e4e-8597-3ca304100614 ']' 00:10:10.359 10:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:10.359 10:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:10.359 10:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:10.359 10:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67036 00:10:10.359 10:33:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 67036 ']' 00:10:10.359 10:33:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 67036 00:10:10.359 10:33:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:10.359 10:33:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:10.360 10:33:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67036 00:10:10.360 killing process with pid 67036 00:10:10.360 10:33:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:10.360 10:33:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:10.360 10:33:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67036' 00:10:10.360 10:33:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 67036 00:10:10.360 [2024-11-20 10:33:13.651393] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:10:10.360 [2024-11-20 10:33:13.651503] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:10.360 10:33:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 67036 00:10:10.360 [2024-11-20 10:33:13.651568] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:10.360 [2024-11-20 10:33:13.651582] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:10.618 [2024-11-20 10:33:13.967578] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:11.996 10:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:11.996 00:10:11.996 real 0m5.321s 00:10:11.996 user 0m7.600s 00:10:11.996 sys 0m0.915s 00:10:11.996 10:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.996 10:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.996 ************************************ 00:10:11.996 END TEST raid_superblock_test 00:10:11.996 ************************************ 00:10:11.996 10:33:15 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:10:11.996 10:33:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:11.996 10:33:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.996 10:33:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:11.996 ************************************ 00:10:11.996 START TEST raid_read_error_test 00:10:11.996 ************************************ 00:10:11.996 10:33:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:10:11.996 10:33:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:11.996 10:33:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:10:11.996 10:33:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:11.996 10:33:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:11.996 10:33:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:11.996 10:33:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:11.996 10:33:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:11.996 10:33:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:11.996 10:33:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:11.996 10:33:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:11.996 10:33:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:11.996 10:33:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:11.996 10:33:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:11.996 10:33:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:11.996 10:33:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:11.996 10:33:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:11.996 10:33:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:11.996 10:33:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:11.996 10:33:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:11.996 10:33:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:11.996 10:33:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:11.996 10:33:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:11.996 10:33:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:11.996 10:33:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:11.996 10:33:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:11.996 10:33:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.TpQLprl5Mv 00:10:11.996 10:33:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67287 00:10:11.996 10:33:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:11.996 10:33:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67287 00:10:11.996 10:33:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67287 ']' 00:10:11.996 10:33:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.996 10:33:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:11.996 10:33:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.996 10:33:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:11.996 10:33:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.996 [2024-11-20 10:33:15.318024] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:10:11.996 [2024-11-20 10:33:15.318224] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67287 ] 00:10:12.255 [2024-11-20 10:33:15.496872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.255 [2024-11-20 10:33:15.621378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.514 [2024-11-20 10:33:15.832167] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:12.514 [2024-11-20 10:33:15.832214] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:12.774 10:33:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:12.774 10:33:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:12.774 10:33:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:12.774 10:33:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:12.774 10:33:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.774 10:33:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.774 BaseBdev1_malloc 00:10:12.774 10:33:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.774 10:33:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:12.774 10:33:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.774 10:33:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.774 true 00:10:12.774 10:33:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:12.774 10:33:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:12.774 10:33:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.774 10:33:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.774 [2024-11-20 10:33:16.201916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:12.774 [2024-11-20 10:33:16.202046] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.774 [2024-11-20 10:33:16.202090] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:12.774 [2024-11-20 10:33:16.202128] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.774 [2024-11-20 10:33:16.204284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.774 [2024-11-20 10:33:16.204387] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:12.774 BaseBdev1 00:10:12.774 10:33:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.774 10:33:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:12.774 10:33:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:12.774 10:33:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.774 10:33:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.774 BaseBdev2_malloc 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.034 true 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.034 [2024-11-20 10:33:16.269530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:13.034 [2024-11-20 10:33:16.269596] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.034 [2024-11-20 10:33:16.269617] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:13.034 [2024-11-20 10:33:16.269632] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.034 [2024-11-20 10:33:16.271956] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.034 [2024-11-20 10:33:16.272011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:13.034 BaseBdev2 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.034 BaseBdev3_malloc 00:10:13.034 10:33:16 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.034 true 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.034 [2024-11-20 10:33:16.349575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:13.034 [2024-11-20 10:33:16.349734] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.034 [2024-11-20 10:33:16.349797] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:13.034 [2024-11-20 10:33:16.349863] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.034 [2024-11-20 10:33:16.352912] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.034 [2024-11-20 10:33:16.353068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:13.034 BaseBdev3 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.034 [2024-11-20 10:33:16.357897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:13.034 [2024-11-20 10:33:16.360579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:13.034 [2024-11-20 10:33:16.360781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:13.034 [2024-11-20 10:33:16.361155] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:13.034 [2024-11-20 10:33:16.361236] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:13.034 [2024-11-20 10:33:16.361679] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:13.034 [2024-11-20 10:33:16.361976] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:13.034 [2024-11-20 10:33:16.362009] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:13.034 [2024-11-20 10:33:16.362287] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.034 10:33:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.034 10:33:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.034 "name": "raid_bdev1", 00:10:13.034 "uuid": "d49d64b7-91e9-4af5-ac5e-5d75b05f3a21", 00:10:13.034 "strip_size_kb": 64, 00:10:13.034 "state": "online", 00:10:13.034 "raid_level": "concat", 00:10:13.034 "superblock": true, 00:10:13.034 "num_base_bdevs": 3, 00:10:13.034 "num_base_bdevs_discovered": 3, 00:10:13.034 "num_base_bdevs_operational": 3, 00:10:13.034 "base_bdevs_list": [ 00:10:13.034 { 00:10:13.034 "name": "BaseBdev1", 00:10:13.034 "uuid": "e60de36e-b88f-5cf5-b5e7-a17f3e29d18f", 00:10:13.034 "is_configured": true, 00:10:13.035 "data_offset": 2048, 00:10:13.035 "data_size": 63488 00:10:13.035 }, 00:10:13.035 { 00:10:13.035 "name": "BaseBdev2", 00:10:13.035 "uuid": "83e531a4-63c5-5b10-9b2b-919f4cfc1969", 00:10:13.035 "is_configured": true, 00:10:13.035 "data_offset": 2048, 00:10:13.035 "data_size": 63488 
00:10:13.035 }, 00:10:13.035 { 00:10:13.035 "name": "BaseBdev3", 00:10:13.035 "uuid": "0ceb7d70-f7cf-5122-b912-170a7cf3d91f", 00:10:13.035 "is_configured": true, 00:10:13.035 "data_offset": 2048, 00:10:13.035 "data_size": 63488 00:10:13.035 } 00:10:13.035 ] 00:10:13.035 }' 00:10:13.035 10:33:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.035 10:33:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.602 10:33:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:13.602 10:33:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:13.602 [2024-11-20 10:33:16.918651] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:14.538 10:33:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:14.538 10:33:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.538 10:33:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.538 10:33:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.538 10:33:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:14.538 10:33:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:14.538 10:33:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:14.538 10:33:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:14.538 10:33:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:14.538 10:33:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:10:14.538 10:33:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:14.538 10:33:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.538 10:33:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.538 10:33:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.538 10:33:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.539 10:33:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.539 10:33:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.539 10:33:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.539 10:33:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:14.539 10:33:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.539 10:33:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.539 10:33:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.539 10:33:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.539 "name": "raid_bdev1", 00:10:14.539 "uuid": "d49d64b7-91e9-4af5-ac5e-5d75b05f3a21", 00:10:14.539 "strip_size_kb": 64, 00:10:14.539 "state": "online", 00:10:14.539 "raid_level": "concat", 00:10:14.539 "superblock": true, 00:10:14.539 "num_base_bdevs": 3, 00:10:14.539 "num_base_bdevs_discovered": 3, 00:10:14.539 "num_base_bdevs_operational": 3, 00:10:14.539 "base_bdevs_list": [ 00:10:14.539 { 00:10:14.539 "name": "BaseBdev1", 00:10:14.539 "uuid": "e60de36e-b88f-5cf5-b5e7-a17f3e29d18f", 00:10:14.539 "is_configured": true, 00:10:14.539 "data_offset": 2048, 00:10:14.539 "data_size": 63488 
00:10:14.539 }, 00:10:14.539 { 00:10:14.539 "name": "BaseBdev2", 00:10:14.539 "uuid": "83e531a4-63c5-5b10-9b2b-919f4cfc1969", 00:10:14.539 "is_configured": true, 00:10:14.539 "data_offset": 2048, 00:10:14.539 "data_size": 63488 00:10:14.539 }, 00:10:14.539 { 00:10:14.539 "name": "BaseBdev3", 00:10:14.539 "uuid": "0ceb7d70-f7cf-5122-b912-170a7cf3d91f", 00:10:14.539 "is_configured": true, 00:10:14.539 "data_offset": 2048, 00:10:14.539 "data_size": 63488 00:10:14.539 } 00:10:14.539 ] 00:10:14.539 }' 00:10:14.539 10:33:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.539 10:33:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.107 10:33:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:15.107 10:33:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.107 10:33:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.107 [2024-11-20 10:33:18.299399] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:15.107 [2024-11-20 10:33:18.299503] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:15.107 [2024-11-20 10:33:18.302327] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:15.107 [2024-11-20 10:33:18.302460] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.107 [2024-11-20 10:33:18.302530] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:15.107 [2024-11-20 10:33:18.302592] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:15.107 { 00:10:15.108 "results": [ 00:10:15.108 { 00:10:15.108 "job": "raid_bdev1", 00:10:15.108 "core_mask": "0x1", 00:10:15.108 "workload": "randrw", 00:10:15.108 "percentage": 50, 
00:10:15.108 "status": "finished", 00:10:15.108 "queue_depth": 1, 00:10:15.108 "io_size": 131072, 00:10:15.108 "runtime": 1.381591, 00:10:15.108 "iops": 14663.529221021272, 00:10:15.108 "mibps": 1832.941152627659, 00:10:15.108 "io_failed": 1, 00:10:15.108 "io_timeout": 0, 00:10:15.108 "avg_latency_us": 94.5506011371817, 00:10:15.108 "min_latency_us": 27.83580786026201, 00:10:15.108 "max_latency_us": 1452.380786026201 00:10:15.108 } 00:10:15.108 ], 00:10:15.108 "core_count": 1 00:10:15.108 } 00:10:15.108 10:33:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.108 10:33:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67287 00:10:15.108 10:33:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67287 ']' 00:10:15.108 10:33:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67287 00:10:15.108 10:33:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:15.108 10:33:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:15.108 10:33:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67287 00:10:15.108 10:33:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:15.108 10:33:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:15.108 killing process with pid 67287 00:10:15.108 10:33:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67287' 00:10:15.108 10:33:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67287 00:10:15.108 [2024-11-20 10:33:18.345518] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:15.108 10:33:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67287 00:10:15.367 [2024-11-20 
10:33:18.589786] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:16.757 10:33:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:16.757 10:33:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:16.757 10:33:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.TpQLprl5Mv 00:10:16.757 ************************************ 00:10:16.757 END TEST raid_read_error_test 00:10:16.757 ************************************ 00:10:16.757 10:33:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:16.757 10:33:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:16.757 10:33:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:16.757 10:33:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:16.757 10:33:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:16.757 00:10:16.757 real 0m4.683s 00:10:16.757 user 0m5.550s 00:10:16.757 sys 0m0.565s 00:10:16.757 10:33:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.757 10:33:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.757 10:33:19 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:10:16.757 10:33:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:16.757 10:33:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.757 10:33:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:16.757 ************************************ 00:10:16.757 START TEST raid_write_error_test 00:10:16.757 ************************************ 00:10:16.757 10:33:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:10:16.757 10:33:19 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:16.757 10:33:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:16.757 10:33:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:16.757 10:33:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:16.757 10:33:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:16.757 10:33:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:16.757 10:33:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:16.757 10:33:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:16.757 10:33:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:16.757 10:33:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:16.757 10:33:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:16.757 10:33:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:16.757 10:33:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:16.757 10:33:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:16.757 10:33:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:16.757 10:33:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:16.757 10:33:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:16.757 10:33:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:16.758 10:33:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:16.758 10:33:19 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:16.758 10:33:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:16.758 10:33:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:16.758 10:33:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:16.758 10:33:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:16.758 10:33:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:16.758 10:33:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.rR9MesvQaK 00:10:16.758 10:33:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67432 00:10:16.758 10:33:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:16.758 10:33:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67432 00:10:16.758 10:33:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67432 ']' 00:10:16.758 10:33:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.758 10:33:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:16.758 10:33:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:16.758 10:33:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:16.758 10:33:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.758 [2024-11-20 10:33:20.031279] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:10:16.758 [2024-11-20 10:33:20.031836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67432 ] 00:10:16.758 [2024-11-20 10:33:20.211472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.017 [2024-11-20 10:33:20.334041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.277 [2024-11-20 10:33:20.537968] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:17.277 [2024-11-20 10:33:20.538044] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:17.535 10:33:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:17.535 10:33:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:17.535 10:33:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:17.535 10:33:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:17.535 10:33:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.535 10:33:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.535 BaseBdev1_malloc 00:10:17.535 10:33:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.535 10:33:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:17.535 10:33:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.535 10:33:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.535 true 00:10:17.535 10:33:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.535 10:33:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:17.535 10:33:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.535 10:33:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.535 [2024-11-20 10:33:20.958835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:17.536 [2024-11-20 10:33:20.958902] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.536 [2024-11-20 10:33:20.958926] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:17.536 [2024-11-20 10:33:20.958939] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.536 [2024-11-20 10:33:20.961174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.536 [2024-11-20 10:33:20.961302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:17.536 BaseBdev1 00:10:17.536 10:33:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.536 10:33:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:17.536 10:33:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:17.536 10:33:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.536 10:33:20 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:17.536 BaseBdev2_malloc 00:10:17.536 10:33:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.536 10:33:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:17.536 10:33:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.536 10:33:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.797 true 00:10:17.797 10:33:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.797 10:33:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:17.797 10:33:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.797 10:33:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.797 [2024-11-20 10:33:21.021483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:17.797 [2024-11-20 10:33:21.021552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.797 [2024-11-20 10:33:21.021573] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:17.797 [2024-11-20 10:33:21.021588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.797 [2024-11-20 10:33:21.023931] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.797 [2024-11-20 10:33:21.023982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:17.797 BaseBdev2 00:10:17.797 10:33:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.797 10:33:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:17.797 10:33:21 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:17.797 10:33:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.797 10:33:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.797 BaseBdev3_malloc 00:10:17.797 10:33:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.797 10:33:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:17.797 10:33:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.797 10:33:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.797 true 00:10:17.797 10:33:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.797 10:33:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:17.797 10:33:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.797 10:33:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.797 [2024-11-20 10:33:21.103490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:17.797 [2024-11-20 10:33:21.103564] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.797 [2024-11-20 10:33:21.103590] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:17.797 [2024-11-20 10:33:21.103606] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.797 [2024-11-20 10:33:21.106097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.797 [2024-11-20 10:33:21.106146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:17.797 BaseBdev3 00:10:17.797 10:33:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.797 10:33:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:17.797 10:33:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.797 10:33:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.797 [2024-11-20 10:33:21.111552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:17.797 [2024-11-20 10:33:21.113607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:17.797 [2024-11-20 10:33:21.113721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:17.797 [2024-11-20 10:33:21.113954] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:17.797 [2024-11-20 10:33:21.113969] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:17.797 [2024-11-20 10:33:21.114264] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:17.797 [2024-11-20 10:33:21.114482] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:17.797 [2024-11-20 10:33:21.114508] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:17.798 [2024-11-20 10:33:21.114694] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:17.798 10:33:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.798 10:33:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:17.798 10:33:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.798 10:33:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.798 10:33:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:17.798 10:33:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.798 10:33:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.798 10:33:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.798 10:33:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.798 10:33:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.798 10:33:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.798 10:33:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.798 10:33:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.798 10:33:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.798 10:33:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.798 10:33:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.798 10:33:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.798 "name": "raid_bdev1", 00:10:17.798 "uuid": "c0909be7-c407-4a1c-9771-0ae934969463", 00:10:17.798 "strip_size_kb": 64, 00:10:17.798 "state": "online", 00:10:17.798 "raid_level": "concat", 00:10:17.798 "superblock": true, 00:10:17.798 "num_base_bdevs": 3, 00:10:17.798 "num_base_bdevs_discovered": 3, 00:10:17.798 "num_base_bdevs_operational": 3, 00:10:17.798 "base_bdevs_list": [ 00:10:17.798 { 00:10:17.798 
"name": "BaseBdev1", 00:10:17.798 "uuid": "fbc52540-c09a-5b24-8be0-80628a3b7a1f", 00:10:17.798 "is_configured": true, 00:10:17.798 "data_offset": 2048, 00:10:17.798 "data_size": 63488 00:10:17.798 }, 00:10:17.798 { 00:10:17.798 "name": "BaseBdev2", 00:10:17.798 "uuid": "bf6f80ca-50d7-5a03-ba0b-65415e317cc2", 00:10:17.798 "is_configured": true, 00:10:17.798 "data_offset": 2048, 00:10:17.798 "data_size": 63488 00:10:17.798 }, 00:10:17.798 { 00:10:17.798 "name": "BaseBdev3", 00:10:17.798 "uuid": "8e8f718f-98cb-543a-8091-82ce31cc545b", 00:10:17.798 "is_configured": true, 00:10:17.798 "data_offset": 2048, 00:10:17.798 "data_size": 63488 00:10:17.798 } 00:10:17.798 ] 00:10:17.798 }' 00:10:17.798 10:33:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.798 10:33:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.367 10:33:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:18.367 10:33:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:18.367 [2024-11-20 10:33:21.719875] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:19.304 10:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:19.304 10:33:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.304 10:33:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.304 10:33:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.304 10:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:19.304 10:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:19.304 10:33:22 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:19.304 10:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:19.304 10:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:19.304 10:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:19.304 10:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:19.304 10:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.304 10:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.304 10:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.304 10:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.304 10:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.304 10:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.304 10:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.304 10:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:19.304 10:33:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.304 10:33:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.304 10:33:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.304 10:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.304 "name": "raid_bdev1", 00:10:19.304 "uuid": "c0909be7-c407-4a1c-9771-0ae934969463", 00:10:19.304 "strip_size_kb": 64, 00:10:19.304 "state": "online", 
00:10:19.304 "raid_level": "concat", 00:10:19.304 "superblock": true, 00:10:19.304 "num_base_bdevs": 3, 00:10:19.304 "num_base_bdevs_discovered": 3, 00:10:19.304 "num_base_bdevs_operational": 3, 00:10:19.304 "base_bdevs_list": [ 00:10:19.304 { 00:10:19.304 "name": "BaseBdev1", 00:10:19.304 "uuid": "fbc52540-c09a-5b24-8be0-80628a3b7a1f", 00:10:19.304 "is_configured": true, 00:10:19.304 "data_offset": 2048, 00:10:19.304 "data_size": 63488 00:10:19.304 }, 00:10:19.304 { 00:10:19.304 "name": "BaseBdev2", 00:10:19.304 "uuid": "bf6f80ca-50d7-5a03-ba0b-65415e317cc2", 00:10:19.304 "is_configured": true, 00:10:19.304 "data_offset": 2048, 00:10:19.304 "data_size": 63488 00:10:19.304 }, 00:10:19.304 { 00:10:19.304 "name": "BaseBdev3", 00:10:19.304 "uuid": "8e8f718f-98cb-543a-8091-82ce31cc545b", 00:10:19.304 "is_configured": true, 00:10:19.304 "data_offset": 2048, 00:10:19.304 "data_size": 63488 00:10:19.304 } 00:10:19.304 ] 00:10:19.304 }' 00:10:19.304 10:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.304 10:33:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.873 10:33:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:19.873 10:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.873 10:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.873 [2024-11-20 10:33:23.116668] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:19.873 [2024-11-20 10:33:23.116774] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:19.873 [2024-11-20 10:33:23.120021] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:19.873 [2024-11-20 10:33:23.120131] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:19.873 [2024-11-20 10:33:23.120216] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:19.873 [2024-11-20 10:33:23.120283] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:19.873 { 00:10:19.873 "results": [ 00:10:19.873 { 00:10:19.873 "job": "raid_bdev1", 00:10:19.873 "core_mask": "0x1", 00:10:19.873 "workload": "randrw", 00:10:19.873 "percentage": 50, 00:10:19.873 "status": "finished", 00:10:19.873 "queue_depth": 1, 00:10:19.873 "io_size": 131072, 00:10:19.873 "runtime": 1.397574, 00:10:19.873 "iops": 13935.57693546102, 00:10:19.873 "mibps": 1741.9471169326275, 00:10:19.873 "io_failed": 1, 00:10:19.873 "io_timeout": 0, 00:10:19.873 "avg_latency_us": 99.4789630048475, 00:10:19.873 "min_latency_us": 27.94759825327511, 00:10:19.873 "max_latency_us": 1645.5545851528384 00:10:19.873 } 00:10:19.873 ], 00:10:19.873 "core_count": 1 00:10:19.873 } 00:10:19.873 10:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.873 10:33:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67432 00:10:19.873 10:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67432 ']' 00:10:19.873 10:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67432 00:10:19.873 10:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:19.873 10:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:19.873 10:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67432 00:10:19.873 killing process with pid 67432 00:10:19.873 10:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:19.873 10:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:19.873 10:33:23 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67432' 00:10:19.873 10:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67432 00:10:19.873 [2024-11-20 10:33:23.168806] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:19.873 10:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67432 00:10:20.133 [2024-11-20 10:33:23.416104] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:21.513 10:33:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.rR9MesvQaK 00:10:21.513 10:33:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:21.513 10:33:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:21.513 10:33:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:21.513 10:33:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:21.513 10:33:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:21.513 10:33:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:21.513 ************************************ 00:10:21.513 END TEST raid_write_error_test 00:10:21.513 ************************************ 00:10:21.513 10:33:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:21.513 00:10:21.513 real 0m4.732s 00:10:21.513 user 0m5.697s 00:10:21.513 sys 0m0.591s 00:10:21.513 10:33:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:21.513 10:33:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.513 10:33:24 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:21.513 10:33:24 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:10:21.513 10:33:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:21.513 10:33:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:21.513 10:33:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:21.513 ************************************ 00:10:21.513 START TEST raid_state_function_test 00:10:21.513 ************************************ 00:10:21.513 10:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:10:21.513 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:21.513 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:21.513 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:21.513 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:21.513 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:21.513 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:21.513 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:21.513 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:21.513 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:21.513 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:21.513 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:21.513 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:21.513 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:21.513 10:33:24 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:21.513 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:21.513 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:21.513 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:21.513 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:21.513 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:21.513 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:21.513 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:21.513 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:21.513 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:21.513 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:21.513 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:21.513 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:21.513 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67576 00:10:21.513 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67576' 00:10:21.513 Process raid pid: 67576 00:10:21.513 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67576 00:10:21.513 10:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67576 ']' 00:10:21.513 10:33:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.514 10:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:21.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.514 10:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.514 10:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:21.514 10:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.514 [2024-11-20 10:33:24.822915] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:10:21.514 [2024-11-20 10:33:24.823942] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.773 [2024-11-20 10:33:25.025903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.773 [2024-11-20 10:33:25.157727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.032 [2024-11-20 10:33:25.381848] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.032 [2024-11-20 10:33:25.382097] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.291 10:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:22.291 10:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:22.291 10:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:22.291 10:33:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.291 10:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.291 [2024-11-20 10:33:25.693984] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:22.291 [2024-11-20 10:33:25.694057] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:22.291 [2024-11-20 10:33:25.694071] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:22.291 [2024-11-20 10:33:25.694086] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:22.291 [2024-11-20 10:33:25.694095] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:22.291 [2024-11-20 10:33:25.694108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:22.291 10:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.291 10:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:22.291 10:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.291 10:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.292 10:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:22.292 10:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:22.292 10:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:22.292 10:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.292 10:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.292 
10:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.292 10:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.292 10:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.292 10:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.292 10:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.292 10:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.292 10:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.292 10:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.292 "name": "Existed_Raid", 00:10:22.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.292 "strip_size_kb": 0, 00:10:22.292 "state": "configuring", 00:10:22.292 "raid_level": "raid1", 00:10:22.292 "superblock": false, 00:10:22.292 "num_base_bdevs": 3, 00:10:22.292 "num_base_bdevs_discovered": 0, 00:10:22.292 "num_base_bdevs_operational": 3, 00:10:22.292 "base_bdevs_list": [ 00:10:22.292 { 00:10:22.292 "name": "BaseBdev1", 00:10:22.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.292 "is_configured": false, 00:10:22.292 "data_offset": 0, 00:10:22.292 "data_size": 0 00:10:22.292 }, 00:10:22.292 { 00:10:22.292 "name": "BaseBdev2", 00:10:22.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.292 "is_configured": false, 00:10:22.292 "data_offset": 0, 00:10:22.292 "data_size": 0 00:10:22.292 }, 00:10:22.292 { 00:10:22.292 "name": "BaseBdev3", 00:10:22.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.292 "is_configured": false, 00:10:22.292 "data_offset": 0, 00:10:22.292 "data_size": 0 00:10:22.292 } 00:10:22.292 ] 00:10:22.292 }' 00:10:22.292 10:33:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.292 10:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.862 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:22.862 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.862 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.862 [2024-11-20 10:33:26.141234] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:22.862 [2024-11-20 10:33:26.141374] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:22.862 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.862 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:22.862 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.862 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.862 [2024-11-20 10:33:26.153182] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:22.862 [2024-11-20 10:33:26.153289] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:22.862 [2024-11-20 10:33:26.153325] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:22.862 [2024-11-20 10:33:26.153365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:22.862 [2024-11-20 10:33:26.153407] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:22.862 [2024-11-20 10:33:26.153439] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:22.862 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.862 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:22.862 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.862 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.862 [2024-11-20 10:33:26.206622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:22.862 BaseBdev1 00:10:22.862 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.862 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:22.862 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:22.862 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:22.862 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:22.862 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:22.862 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:22.862 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:22.862 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.862 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.862 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.862 10:33:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:22.862 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.862 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.862 [ 00:10:22.862 { 00:10:22.862 "name": "BaseBdev1", 00:10:22.862 "aliases": [ 00:10:22.862 "f27a9edf-e47b-48d5-b8ce-2cde380ee72b" 00:10:22.862 ], 00:10:22.862 "product_name": "Malloc disk", 00:10:22.862 "block_size": 512, 00:10:22.862 "num_blocks": 65536, 00:10:22.862 "uuid": "f27a9edf-e47b-48d5-b8ce-2cde380ee72b", 00:10:22.862 "assigned_rate_limits": { 00:10:22.862 "rw_ios_per_sec": 0, 00:10:22.862 "rw_mbytes_per_sec": 0, 00:10:22.862 "r_mbytes_per_sec": 0, 00:10:22.862 "w_mbytes_per_sec": 0 00:10:22.862 }, 00:10:22.862 "claimed": true, 00:10:22.862 "claim_type": "exclusive_write", 00:10:22.862 "zoned": false, 00:10:22.862 "supported_io_types": { 00:10:22.862 "read": true, 00:10:22.862 "write": true, 00:10:22.862 "unmap": true, 00:10:22.862 "flush": true, 00:10:22.862 "reset": true, 00:10:22.862 "nvme_admin": false, 00:10:22.862 "nvme_io": false, 00:10:22.862 "nvme_io_md": false, 00:10:22.862 "write_zeroes": true, 00:10:22.862 "zcopy": true, 00:10:22.862 "get_zone_info": false, 00:10:22.862 "zone_management": false, 00:10:22.862 "zone_append": false, 00:10:22.862 "compare": false, 00:10:22.862 "compare_and_write": false, 00:10:22.862 "abort": true, 00:10:22.862 "seek_hole": false, 00:10:22.862 "seek_data": false, 00:10:22.862 "copy": true, 00:10:22.862 "nvme_iov_md": false 00:10:22.862 }, 00:10:22.862 "memory_domains": [ 00:10:22.862 { 00:10:22.862 "dma_device_id": "system", 00:10:22.862 "dma_device_type": 1 00:10:22.862 }, 00:10:22.862 { 00:10:22.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.862 "dma_device_type": 2 00:10:22.862 } 00:10:22.862 ], 00:10:22.862 "driver_specific": {} 00:10:22.862 } 00:10:22.862 ] 00:10:22.862 10:33:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.862 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:22.862 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:22.862 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.862 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.862 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:22.862 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:22.862 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:22.863 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.863 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.863 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.863 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.863 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.863 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.863 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.863 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.863 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.863 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:22.863 "name": "Existed_Raid", 00:10:22.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.863 "strip_size_kb": 0, 00:10:22.863 "state": "configuring", 00:10:22.863 "raid_level": "raid1", 00:10:22.863 "superblock": false, 00:10:22.863 "num_base_bdevs": 3, 00:10:22.863 "num_base_bdevs_discovered": 1, 00:10:22.863 "num_base_bdevs_operational": 3, 00:10:22.863 "base_bdevs_list": [ 00:10:22.863 { 00:10:22.863 "name": "BaseBdev1", 00:10:22.863 "uuid": "f27a9edf-e47b-48d5-b8ce-2cde380ee72b", 00:10:22.863 "is_configured": true, 00:10:22.863 "data_offset": 0, 00:10:22.863 "data_size": 65536 00:10:22.863 }, 00:10:22.863 { 00:10:22.863 "name": "BaseBdev2", 00:10:22.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.863 "is_configured": false, 00:10:22.863 "data_offset": 0, 00:10:22.863 "data_size": 0 00:10:22.863 }, 00:10:22.863 { 00:10:22.863 "name": "BaseBdev3", 00:10:22.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.863 "is_configured": false, 00:10:22.863 "data_offset": 0, 00:10:22.863 "data_size": 0 00:10:22.863 } 00:10:22.863 ] 00:10:22.863 }' 00:10:22.863 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.863 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.431 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:23.431 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.431 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.431 [2024-11-20 10:33:26.677900] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:23.431 [2024-11-20 10:33:26.678035] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:23.431 10:33:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.431 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:23.431 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.431 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.431 [2024-11-20 10:33:26.689925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:23.431 [2024-11-20 10:33:26.692016] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:23.431 [2024-11-20 10:33:26.692076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:23.431 [2024-11-20 10:33:26.692089] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:23.431 [2024-11-20 10:33:26.692103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:23.431 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.431 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:23.431 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:23.431 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:23.431 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.431 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.432 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:23.432 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:10:23.432 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:23.432 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.432 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.432 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.432 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.432 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.432 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.432 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.432 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.432 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.432 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.432 "name": "Existed_Raid", 00:10:23.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.432 "strip_size_kb": 0, 00:10:23.432 "state": "configuring", 00:10:23.432 "raid_level": "raid1", 00:10:23.432 "superblock": false, 00:10:23.432 "num_base_bdevs": 3, 00:10:23.432 "num_base_bdevs_discovered": 1, 00:10:23.432 "num_base_bdevs_operational": 3, 00:10:23.432 "base_bdevs_list": [ 00:10:23.432 { 00:10:23.432 "name": "BaseBdev1", 00:10:23.432 "uuid": "f27a9edf-e47b-48d5-b8ce-2cde380ee72b", 00:10:23.432 "is_configured": true, 00:10:23.432 "data_offset": 0, 00:10:23.432 "data_size": 65536 00:10:23.432 }, 00:10:23.432 { 00:10:23.432 "name": "BaseBdev2", 00:10:23.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.432 
"is_configured": false, 00:10:23.432 "data_offset": 0, 00:10:23.432 "data_size": 0 00:10:23.432 }, 00:10:23.432 { 00:10:23.432 "name": "BaseBdev3", 00:10:23.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.432 "is_configured": false, 00:10:23.432 "data_offset": 0, 00:10:23.432 "data_size": 0 00:10:23.432 } 00:10:23.432 ] 00:10:23.432 }' 00:10:23.432 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.432 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.011 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:24.011 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.011 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.011 [2024-11-20 10:33:27.250574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:24.011 BaseBdev2 00:10:24.011 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.011 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:24.011 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:24.011 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:24.011 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:24.011 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:24.011 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:24.011 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:24.011 10:33:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.011 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.011 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.011 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:24.011 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.011 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.011 [ 00:10:24.011 { 00:10:24.011 "name": "BaseBdev2", 00:10:24.011 "aliases": [ 00:10:24.011 "ea155874-9948-439d-a7dd-cac20980f32e" 00:10:24.011 ], 00:10:24.011 "product_name": "Malloc disk", 00:10:24.011 "block_size": 512, 00:10:24.011 "num_blocks": 65536, 00:10:24.011 "uuid": "ea155874-9948-439d-a7dd-cac20980f32e", 00:10:24.011 "assigned_rate_limits": { 00:10:24.011 "rw_ios_per_sec": 0, 00:10:24.011 "rw_mbytes_per_sec": 0, 00:10:24.011 "r_mbytes_per_sec": 0, 00:10:24.011 "w_mbytes_per_sec": 0 00:10:24.011 }, 00:10:24.011 "claimed": true, 00:10:24.011 "claim_type": "exclusive_write", 00:10:24.011 "zoned": false, 00:10:24.011 "supported_io_types": { 00:10:24.011 "read": true, 00:10:24.011 "write": true, 00:10:24.011 "unmap": true, 00:10:24.011 "flush": true, 00:10:24.011 "reset": true, 00:10:24.011 "nvme_admin": false, 00:10:24.011 "nvme_io": false, 00:10:24.011 "nvme_io_md": false, 00:10:24.011 "write_zeroes": true, 00:10:24.011 "zcopy": true, 00:10:24.011 "get_zone_info": false, 00:10:24.011 "zone_management": false, 00:10:24.011 "zone_append": false, 00:10:24.011 "compare": false, 00:10:24.011 "compare_and_write": false, 00:10:24.011 "abort": true, 00:10:24.011 "seek_hole": false, 00:10:24.011 "seek_data": false, 00:10:24.011 "copy": true, 00:10:24.011 "nvme_iov_md": false 00:10:24.011 }, 00:10:24.011 
"memory_domains": [ 00:10:24.011 { 00:10:24.011 "dma_device_id": "system", 00:10:24.011 "dma_device_type": 1 00:10:24.011 }, 00:10:24.011 { 00:10:24.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.011 "dma_device_type": 2 00:10:24.011 } 00:10:24.011 ], 00:10:24.011 "driver_specific": {} 00:10:24.011 } 00:10:24.011 ] 00:10:24.011 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.011 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:24.011 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:24.011 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:24.011 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:24.011 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.011 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.011 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:24.011 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:24.011 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:24.011 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.011 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.011 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.011 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.011 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:24.011 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.011 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.011 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.011 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.011 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.011 "name": "Existed_Raid", 00:10:24.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.011 "strip_size_kb": 0, 00:10:24.011 "state": "configuring", 00:10:24.011 "raid_level": "raid1", 00:10:24.011 "superblock": false, 00:10:24.011 "num_base_bdevs": 3, 00:10:24.011 "num_base_bdevs_discovered": 2, 00:10:24.011 "num_base_bdevs_operational": 3, 00:10:24.011 "base_bdevs_list": [ 00:10:24.011 { 00:10:24.011 "name": "BaseBdev1", 00:10:24.011 "uuid": "f27a9edf-e47b-48d5-b8ce-2cde380ee72b", 00:10:24.011 "is_configured": true, 00:10:24.011 "data_offset": 0, 00:10:24.011 "data_size": 65536 00:10:24.011 }, 00:10:24.011 { 00:10:24.011 "name": "BaseBdev2", 00:10:24.011 "uuid": "ea155874-9948-439d-a7dd-cac20980f32e", 00:10:24.011 "is_configured": true, 00:10:24.011 "data_offset": 0, 00:10:24.011 "data_size": 65536 00:10:24.011 }, 00:10:24.011 { 00:10:24.011 "name": "BaseBdev3", 00:10:24.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.011 "is_configured": false, 00:10:24.011 "data_offset": 0, 00:10:24.011 "data_size": 0 00:10:24.011 } 00:10:24.011 ] 00:10:24.011 }' 00:10:24.011 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.011 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.270 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:10:24.270 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.270 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.270 [2024-11-20 10:33:27.735120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:24.270 [2024-11-20 10:33:27.735175] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:24.270 [2024-11-20 10:33:27.735189] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:24.270 [2024-11-20 10:33:27.735510] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:24.270 [2024-11-20 10:33:27.735686] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:24.270 [2024-11-20 10:33:27.735695] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:24.270 [2024-11-20 10:33:27.736005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.270 BaseBdev3 00:10:24.270 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.270 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:24.270 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:24.270 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:24.270 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:24.270 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:24.270 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:24.270 10:33:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:24.270 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.270 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.530 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.530 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:24.530 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.530 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.530 [ 00:10:24.530 { 00:10:24.530 "name": "BaseBdev3", 00:10:24.530 "aliases": [ 00:10:24.530 "9a550cdf-9b17-46ae-bad3-3f126a506108" 00:10:24.530 ], 00:10:24.530 "product_name": "Malloc disk", 00:10:24.530 "block_size": 512, 00:10:24.530 "num_blocks": 65536, 00:10:24.530 "uuid": "9a550cdf-9b17-46ae-bad3-3f126a506108", 00:10:24.530 "assigned_rate_limits": { 00:10:24.530 "rw_ios_per_sec": 0, 00:10:24.530 "rw_mbytes_per_sec": 0, 00:10:24.530 "r_mbytes_per_sec": 0, 00:10:24.530 "w_mbytes_per_sec": 0 00:10:24.530 }, 00:10:24.530 "claimed": true, 00:10:24.530 "claim_type": "exclusive_write", 00:10:24.530 "zoned": false, 00:10:24.530 "supported_io_types": { 00:10:24.530 "read": true, 00:10:24.530 "write": true, 00:10:24.530 "unmap": true, 00:10:24.530 "flush": true, 00:10:24.530 "reset": true, 00:10:24.530 "nvme_admin": false, 00:10:24.530 "nvme_io": false, 00:10:24.530 "nvme_io_md": false, 00:10:24.530 "write_zeroes": true, 00:10:24.530 "zcopy": true, 00:10:24.530 "get_zone_info": false, 00:10:24.530 "zone_management": false, 00:10:24.530 "zone_append": false, 00:10:24.530 "compare": false, 00:10:24.530 "compare_and_write": false, 00:10:24.530 "abort": true, 00:10:24.530 "seek_hole": false, 00:10:24.530 "seek_data": false, 00:10:24.530 
"copy": true, 00:10:24.530 "nvme_iov_md": false 00:10:24.530 }, 00:10:24.530 "memory_domains": [ 00:10:24.530 { 00:10:24.530 "dma_device_id": "system", 00:10:24.530 "dma_device_type": 1 00:10:24.530 }, 00:10:24.530 { 00:10:24.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.530 "dma_device_type": 2 00:10:24.530 } 00:10:24.530 ], 00:10:24.530 "driver_specific": {} 00:10:24.530 } 00:10:24.530 ] 00:10:24.530 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.530 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:24.530 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:24.530 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:24.530 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:24.530 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.530 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.530 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:24.530 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:24.530 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:24.530 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.530 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.530 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.530 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.530 10:33:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.530 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.530 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.530 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.530 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.530 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.530 "name": "Existed_Raid", 00:10:24.530 "uuid": "a594ca84-c40d-4b55-a23c-f969505aa155", 00:10:24.530 "strip_size_kb": 0, 00:10:24.530 "state": "online", 00:10:24.530 "raid_level": "raid1", 00:10:24.530 "superblock": false, 00:10:24.530 "num_base_bdevs": 3, 00:10:24.530 "num_base_bdevs_discovered": 3, 00:10:24.530 "num_base_bdevs_operational": 3, 00:10:24.530 "base_bdevs_list": [ 00:10:24.530 { 00:10:24.530 "name": "BaseBdev1", 00:10:24.530 "uuid": "f27a9edf-e47b-48d5-b8ce-2cde380ee72b", 00:10:24.530 "is_configured": true, 00:10:24.530 "data_offset": 0, 00:10:24.530 "data_size": 65536 00:10:24.530 }, 00:10:24.530 { 00:10:24.530 "name": "BaseBdev2", 00:10:24.530 "uuid": "ea155874-9948-439d-a7dd-cac20980f32e", 00:10:24.530 "is_configured": true, 00:10:24.530 "data_offset": 0, 00:10:24.530 "data_size": 65536 00:10:24.530 }, 00:10:24.530 { 00:10:24.530 "name": "BaseBdev3", 00:10:24.530 "uuid": "9a550cdf-9b17-46ae-bad3-3f126a506108", 00:10:24.530 "is_configured": true, 00:10:24.530 "data_offset": 0, 00:10:24.530 "data_size": 65536 00:10:24.530 } 00:10:24.530 ] 00:10:24.530 }' 00:10:24.530 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.530 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.790 10:33:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:24.790 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:24.790 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:24.790 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:24.790 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:24.790 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:24.790 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:24.790 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:24.790 10:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.791 10:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.791 [2024-11-20 10:33:28.210722] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:24.791 10:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.791 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:24.791 "name": "Existed_Raid", 00:10:24.791 "aliases": [ 00:10:24.791 "a594ca84-c40d-4b55-a23c-f969505aa155" 00:10:24.791 ], 00:10:24.791 "product_name": "Raid Volume", 00:10:24.791 "block_size": 512, 00:10:24.791 "num_blocks": 65536, 00:10:24.791 "uuid": "a594ca84-c40d-4b55-a23c-f969505aa155", 00:10:24.791 "assigned_rate_limits": { 00:10:24.791 "rw_ios_per_sec": 0, 00:10:24.791 "rw_mbytes_per_sec": 0, 00:10:24.791 "r_mbytes_per_sec": 0, 00:10:24.791 "w_mbytes_per_sec": 0 00:10:24.791 }, 00:10:24.791 "claimed": false, 00:10:24.791 "zoned": false, 
00:10:24.791 "supported_io_types": { 00:10:24.791 "read": true, 00:10:24.791 "write": true, 00:10:24.791 "unmap": false, 00:10:24.791 "flush": false, 00:10:24.791 "reset": true, 00:10:24.791 "nvme_admin": false, 00:10:24.791 "nvme_io": false, 00:10:24.791 "nvme_io_md": false, 00:10:24.791 "write_zeroes": true, 00:10:24.791 "zcopy": false, 00:10:24.791 "get_zone_info": false, 00:10:24.791 "zone_management": false, 00:10:24.791 "zone_append": false, 00:10:24.791 "compare": false, 00:10:24.791 "compare_and_write": false, 00:10:24.791 "abort": false, 00:10:24.791 "seek_hole": false, 00:10:24.791 "seek_data": false, 00:10:24.791 "copy": false, 00:10:24.791 "nvme_iov_md": false 00:10:24.791 }, 00:10:24.791 "memory_domains": [ 00:10:24.791 { 00:10:24.791 "dma_device_id": "system", 00:10:24.791 "dma_device_type": 1 00:10:24.791 }, 00:10:24.791 { 00:10:24.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.791 "dma_device_type": 2 00:10:24.791 }, 00:10:24.791 { 00:10:24.791 "dma_device_id": "system", 00:10:24.791 "dma_device_type": 1 00:10:24.791 }, 00:10:24.791 { 00:10:24.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.791 "dma_device_type": 2 00:10:24.791 }, 00:10:24.791 { 00:10:24.791 "dma_device_id": "system", 00:10:24.791 "dma_device_type": 1 00:10:24.791 }, 00:10:24.791 { 00:10:24.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.791 "dma_device_type": 2 00:10:24.791 } 00:10:24.791 ], 00:10:24.791 "driver_specific": { 00:10:24.791 "raid": { 00:10:24.791 "uuid": "a594ca84-c40d-4b55-a23c-f969505aa155", 00:10:24.791 "strip_size_kb": 0, 00:10:24.791 "state": "online", 00:10:24.791 "raid_level": "raid1", 00:10:24.791 "superblock": false, 00:10:24.791 "num_base_bdevs": 3, 00:10:24.791 "num_base_bdevs_discovered": 3, 00:10:24.791 "num_base_bdevs_operational": 3, 00:10:24.791 "base_bdevs_list": [ 00:10:24.791 { 00:10:24.791 "name": "BaseBdev1", 00:10:24.791 "uuid": "f27a9edf-e47b-48d5-b8ce-2cde380ee72b", 00:10:24.791 "is_configured": true, 00:10:24.791 
"data_offset": 0, 00:10:24.791 "data_size": 65536 00:10:24.791 }, 00:10:24.791 { 00:10:24.791 "name": "BaseBdev2", 00:10:24.791 "uuid": "ea155874-9948-439d-a7dd-cac20980f32e", 00:10:24.791 "is_configured": true, 00:10:24.791 "data_offset": 0, 00:10:24.791 "data_size": 65536 00:10:24.791 }, 00:10:24.791 { 00:10:24.791 "name": "BaseBdev3", 00:10:24.791 "uuid": "9a550cdf-9b17-46ae-bad3-3f126a506108", 00:10:24.791 "is_configured": true, 00:10:24.791 "data_offset": 0, 00:10:24.791 "data_size": 65536 00:10:24.791 } 00:10:24.791 ] 00:10:24.791 } 00:10:24.791 } 00:10:24.791 }' 00:10:24.791 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:25.050 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:25.050 BaseBdev2 00:10:25.050 BaseBdev3' 00:10:25.050 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.050 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:25.050 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.050 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:25.050 10:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.050 10:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.050 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.050 10:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.050 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:10:25.050 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.050 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.050 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:25.050 10:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.050 10:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.050 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.050 10:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.050 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.050 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.050 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.050 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:25.050 10:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.050 10:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.050 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.050 10:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.050 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.050 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:10:25.050 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:25.050 10:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.050 10:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.050 [2024-11-20 10:33:28.462059] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:25.308 10:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.308 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:25.308 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:25.308 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:25.308 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:25.308 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:25.308 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:25.308 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.308 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:25.308 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:25.308 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:25.308 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:25.308 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.308 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:10:25.308 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.308 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.309 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.309 10:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.309 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.309 10:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.309 10:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.309 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.309 "name": "Existed_Raid", 00:10:25.309 "uuid": "a594ca84-c40d-4b55-a23c-f969505aa155", 00:10:25.309 "strip_size_kb": 0, 00:10:25.309 "state": "online", 00:10:25.309 "raid_level": "raid1", 00:10:25.309 "superblock": false, 00:10:25.309 "num_base_bdevs": 3, 00:10:25.309 "num_base_bdevs_discovered": 2, 00:10:25.309 "num_base_bdevs_operational": 2, 00:10:25.309 "base_bdevs_list": [ 00:10:25.309 { 00:10:25.309 "name": null, 00:10:25.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.309 "is_configured": false, 00:10:25.309 "data_offset": 0, 00:10:25.309 "data_size": 65536 00:10:25.309 }, 00:10:25.309 { 00:10:25.309 "name": "BaseBdev2", 00:10:25.309 "uuid": "ea155874-9948-439d-a7dd-cac20980f32e", 00:10:25.309 "is_configured": true, 00:10:25.309 "data_offset": 0, 00:10:25.309 "data_size": 65536 00:10:25.309 }, 00:10:25.309 { 00:10:25.309 "name": "BaseBdev3", 00:10:25.309 "uuid": "9a550cdf-9b17-46ae-bad3-3f126a506108", 00:10:25.309 "is_configured": true, 00:10:25.309 "data_offset": 0, 00:10:25.309 "data_size": 65536 00:10:25.309 } 00:10:25.309 ] 
00:10:25.309 }' 00:10:25.309 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.309 10:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.567 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:25.567 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:25.567 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.567 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:25.567 10:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.567 10:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.567 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.567 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:25.567 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:25.567 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:25.567 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.567 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.826 [2024-11-20 10:33:29.043552] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:25.826 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.826 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:25.826 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:25.826 10:33:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.826 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:25.826 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.826 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.826 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.826 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:25.826 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:25.826 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:25.826 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.826 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.826 [2024-11-20 10:33:29.201462] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:25.826 [2024-11-20 10:33:29.201626] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:26.085 [2024-11-20 10:33:29.311608] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:26.085 [2024-11-20 10:33:29.311740] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:26.085 [2024-11-20 10:33:29.311793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:26.085 10:33:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.085 BaseBdev2 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:26.085 
10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.085 [ 00:10:26.085 { 00:10:26.085 "name": "BaseBdev2", 00:10:26.085 "aliases": [ 00:10:26.085 "22adee90-ccbc-43df-b313-0862323cda9b" 00:10:26.085 ], 00:10:26.085 "product_name": "Malloc disk", 00:10:26.085 "block_size": 512, 00:10:26.085 "num_blocks": 65536, 00:10:26.085 "uuid": "22adee90-ccbc-43df-b313-0862323cda9b", 00:10:26.085 "assigned_rate_limits": { 00:10:26.085 "rw_ios_per_sec": 0, 00:10:26.085 "rw_mbytes_per_sec": 0, 00:10:26.085 "r_mbytes_per_sec": 0, 00:10:26.085 "w_mbytes_per_sec": 0 00:10:26.085 }, 00:10:26.085 "claimed": false, 00:10:26.085 "zoned": false, 00:10:26.085 "supported_io_types": { 00:10:26.085 "read": true, 00:10:26.085 "write": true, 00:10:26.085 "unmap": true, 00:10:26.085 "flush": true, 00:10:26.085 "reset": true, 00:10:26.085 "nvme_admin": false, 00:10:26.085 "nvme_io": false, 00:10:26.085 "nvme_io_md": false, 00:10:26.085 "write_zeroes": true, 
00:10:26.085 "zcopy": true, 00:10:26.085 "get_zone_info": false, 00:10:26.085 "zone_management": false, 00:10:26.085 "zone_append": false, 00:10:26.085 "compare": false, 00:10:26.085 "compare_and_write": false, 00:10:26.085 "abort": true, 00:10:26.085 "seek_hole": false, 00:10:26.085 "seek_data": false, 00:10:26.085 "copy": true, 00:10:26.085 "nvme_iov_md": false 00:10:26.085 }, 00:10:26.085 "memory_domains": [ 00:10:26.085 { 00:10:26.085 "dma_device_id": "system", 00:10:26.085 "dma_device_type": 1 00:10:26.085 }, 00:10:26.085 { 00:10:26.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.085 "dma_device_type": 2 00:10:26.085 } 00:10:26.085 ], 00:10:26.085 "driver_specific": {} 00:10:26.085 } 00:10:26.085 ] 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.085 BaseBdev3 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:26.085 10:33:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.085 [ 00:10:26.085 { 00:10:26.085 "name": "BaseBdev3", 00:10:26.085 "aliases": [ 00:10:26.085 "e62fc542-5243-44aa-b4f7-f66bc109e722" 00:10:26.085 ], 00:10:26.085 "product_name": "Malloc disk", 00:10:26.085 "block_size": 512, 00:10:26.085 "num_blocks": 65536, 00:10:26.085 "uuid": "e62fc542-5243-44aa-b4f7-f66bc109e722", 00:10:26.085 "assigned_rate_limits": { 00:10:26.085 "rw_ios_per_sec": 0, 00:10:26.085 "rw_mbytes_per_sec": 0, 00:10:26.085 "r_mbytes_per_sec": 0, 00:10:26.085 "w_mbytes_per_sec": 0 00:10:26.085 }, 00:10:26.085 "claimed": false, 00:10:26.085 "zoned": false, 00:10:26.085 "supported_io_types": { 00:10:26.085 "read": true, 00:10:26.085 "write": true, 00:10:26.085 "unmap": true, 00:10:26.085 "flush": true, 00:10:26.085 "reset": true, 00:10:26.085 "nvme_admin": false, 00:10:26.085 "nvme_io": false, 00:10:26.085 "nvme_io_md": false, 00:10:26.085 "write_zeroes": true, 
00:10:26.085 "zcopy": true, 00:10:26.085 "get_zone_info": false, 00:10:26.085 "zone_management": false, 00:10:26.085 "zone_append": false, 00:10:26.085 "compare": false, 00:10:26.085 "compare_and_write": false, 00:10:26.085 "abort": true, 00:10:26.085 "seek_hole": false, 00:10:26.085 "seek_data": false, 00:10:26.085 "copy": true, 00:10:26.085 "nvme_iov_md": false 00:10:26.085 }, 00:10:26.085 "memory_domains": [ 00:10:26.085 { 00:10:26.085 "dma_device_id": "system", 00:10:26.085 "dma_device_type": 1 00:10:26.085 }, 00:10:26.085 { 00:10:26.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.085 "dma_device_type": 2 00:10:26.085 } 00:10:26.085 ], 00:10:26.085 "driver_specific": {} 00:10:26.085 } 00:10:26.085 ] 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.085 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.085 [2024-11-20 10:33:29.494197] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:26.085 [2024-11-20 10:33:29.494307] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:26.085 [2024-11-20 10:33:29.494373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:26.085 [2024-11-20 10:33:29.496547] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:26.086 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.086 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:26.086 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.086 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.086 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.086 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.086 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.086 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.086 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.086 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.086 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.086 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.086 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.086 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.086 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.086 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.086 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:26.086 "name": "Existed_Raid", 00:10:26.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.086 "strip_size_kb": 0, 00:10:26.086 "state": "configuring", 00:10:26.086 "raid_level": "raid1", 00:10:26.086 "superblock": false, 00:10:26.086 "num_base_bdevs": 3, 00:10:26.086 "num_base_bdevs_discovered": 2, 00:10:26.086 "num_base_bdevs_operational": 3, 00:10:26.086 "base_bdevs_list": [ 00:10:26.086 { 00:10:26.086 "name": "BaseBdev1", 00:10:26.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.086 "is_configured": false, 00:10:26.086 "data_offset": 0, 00:10:26.086 "data_size": 0 00:10:26.086 }, 00:10:26.086 { 00:10:26.086 "name": "BaseBdev2", 00:10:26.086 "uuid": "22adee90-ccbc-43df-b313-0862323cda9b", 00:10:26.086 "is_configured": true, 00:10:26.086 "data_offset": 0, 00:10:26.086 "data_size": 65536 00:10:26.086 }, 00:10:26.086 { 00:10:26.086 "name": "BaseBdev3", 00:10:26.086 "uuid": "e62fc542-5243-44aa-b4f7-f66bc109e722", 00:10:26.086 "is_configured": true, 00:10:26.086 "data_offset": 0, 00:10:26.086 "data_size": 65536 00:10:26.086 } 00:10:26.086 ] 00:10:26.086 }' 00:10:26.086 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.086 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.652 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:26.652 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.652 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.652 [2024-11-20 10:33:29.969394] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:26.652 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.652 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:10:26.652 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.652 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.652 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.652 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.652 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.652 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.652 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.652 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.652 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.652 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.652 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.652 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.652 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.652 10:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.652 10:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.652 "name": "Existed_Raid", 00:10:26.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.652 "strip_size_kb": 0, 00:10:26.652 "state": "configuring", 00:10:26.652 "raid_level": "raid1", 00:10:26.652 "superblock": false, 00:10:26.652 "num_base_bdevs": 3, 
00:10:26.652 "num_base_bdevs_discovered": 1, 00:10:26.652 "num_base_bdevs_operational": 3, 00:10:26.652 "base_bdevs_list": [ 00:10:26.652 { 00:10:26.652 "name": "BaseBdev1", 00:10:26.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.653 "is_configured": false, 00:10:26.653 "data_offset": 0, 00:10:26.653 "data_size": 0 00:10:26.653 }, 00:10:26.653 { 00:10:26.653 "name": null, 00:10:26.653 "uuid": "22adee90-ccbc-43df-b313-0862323cda9b", 00:10:26.653 "is_configured": false, 00:10:26.653 "data_offset": 0, 00:10:26.653 "data_size": 65536 00:10:26.653 }, 00:10:26.653 { 00:10:26.653 "name": "BaseBdev3", 00:10:26.653 "uuid": "e62fc542-5243-44aa-b4f7-f66bc109e722", 00:10:26.653 "is_configured": true, 00:10:26.653 "data_offset": 0, 00:10:26.653 "data_size": 65536 00:10:26.653 } 00:10:26.653 ] 00:10:26.653 }' 00:10:26.653 10:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.653 10:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.219 10:33:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.219 [2024-11-20 10:33:30.505417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:27.219 BaseBdev1 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.219 [ 00:10:27.219 { 00:10:27.219 "name": "BaseBdev1", 00:10:27.219 "aliases": [ 00:10:27.219 "7e48d129-b437-4779-8f5c-17f8ca3cee95" 00:10:27.219 ], 00:10:27.219 "product_name": "Malloc disk", 
00:10:27.219 "block_size": 512, 00:10:27.219 "num_blocks": 65536, 00:10:27.219 "uuid": "7e48d129-b437-4779-8f5c-17f8ca3cee95", 00:10:27.219 "assigned_rate_limits": { 00:10:27.219 "rw_ios_per_sec": 0, 00:10:27.219 "rw_mbytes_per_sec": 0, 00:10:27.219 "r_mbytes_per_sec": 0, 00:10:27.219 "w_mbytes_per_sec": 0 00:10:27.219 }, 00:10:27.219 "claimed": true, 00:10:27.219 "claim_type": "exclusive_write", 00:10:27.219 "zoned": false, 00:10:27.219 "supported_io_types": { 00:10:27.219 "read": true, 00:10:27.219 "write": true, 00:10:27.219 "unmap": true, 00:10:27.219 "flush": true, 00:10:27.219 "reset": true, 00:10:27.219 "nvme_admin": false, 00:10:27.219 "nvme_io": false, 00:10:27.219 "nvme_io_md": false, 00:10:27.219 "write_zeroes": true, 00:10:27.219 "zcopy": true, 00:10:27.219 "get_zone_info": false, 00:10:27.219 "zone_management": false, 00:10:27.219 "zone_append": false, 00:10:27.219 "compare": false, 00:10:27.219 "compare_and_write": false, 00:10:27.219 "abort": true, 00:10:27.219 "seek_hole": false, 00:10:27.219 "seek_data": false, 00:10:27.219 "copy": true, 00:10:27.219 "nvme_iov_md": false 00:10:27.219 }, 00:10:27.219 "memory_domains": [ 00:10:27.219 { 00:10:27.219 "dma_device_id": "system", 00:10:27.219 "dma_device_type": 1 00:10:27.219 }, 00:10:27.219 { 00:10:27.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.219 "dma_device_type": 2 00:10:27.219 } 00:10:27.219 ], 00:10:27.219 "driver_specific": {} 00:10:27.219 } 00:10:27.219 ] 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.219 "name": "Existed_Raid", 00:10:27.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.219 "strip_size_kb": 0, 00:10:27.219 "state": "configuring", 00:10:27.219 "raid_level": "raid1", 00:10:27.219 "superblock": false, 00:10:27.219 "num_base_bdevs": 3, 00:10:27.219 "num_base_bdevs_discovered": 2, 00:10:27.219 "num_base_bdevs_operational": 3, 00:10:27.219 "base_bdevs_list": [ 00:10:27.219 { 00:10:27.219 "name": "BaseBdev1", 00:10:27.219 "uuid": 
"7e48d129-b437-4779-8f5c-17f8ca3cee95", 00:10:27.219 "is_configured": true, 00:10:27.219 "data_offset": 0, 00:10:27.219 "data_size": 65536 00:10:27.219 }, 00:10:27.219 { 00:10:27.219 "name": null, 00:10:27.219 "uuid": "22adee90-ccbc-43df-b313-0862323cda9b", 00:10:27.219 "is_configured": false, 00:10:27.219 "data_offset": 0, 00:10:27.219 "data_size": 65536 00:10:27.219 }, 00:10:27.219 { 00:10:27.219 "name": "BaseBdev3", 00:10:27.219 "uuid": "e62fc542-5243-44aa-b4f7-f66bc109e722", 00:10:27.219 "is_configured": true, 00:10:27.219 "data_offset": 0, 00:10:27.219 "data_size": 65536 00:10:27.219 } 00:10:27.219 ] 00:10:27.219 }' 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.219 10:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.784 10:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:27.784 10:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.784 10:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.784 10:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.784 10:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.784 10:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:27.784 10:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:27.784 10:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.784 10:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.784 [2024-11-20 10:33:31.036552] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:27.784 10:33:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.784 10:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:27.784 10:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.784 10:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.784 10:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.784 10:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.784 10:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.784 10:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.784 10:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.785 10:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.785 10:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.785 10:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.785 10:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.785 10:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.785 10:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.785 10:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.785 10:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.785 "name": "Existed_Raid", 00:10:27.785 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:27.785 "strip_size_kb": 0, 00:10:27.785 "state": "configuring", 00:10:27.785 "raid_level": "raid1", 00:10:27.785 "superblock": false, 00:10:27.785 "num_base_bdevs": 3, 00:10:27.785 "num_base_bdevs_discovered": 1, 00:10:27.785 "num_base_bdevs_operational": 3, 00:10:27.785 "base_bdevs_list": [ 00:10:27.785 { 00:10:27.785 "name": "BaseBdev1", 00:10:27.785 "uuid": "7e48d129-b437-4779-8f5c-17f8ca3cee95", 00:10:27.785 "is_configured": true, 00:10:27.785 "data_offset": 0, 00:10:27.785 "data_size": 65536 00:10:27.785 }, 00:10:27.785 { 00:10:27.785 "name": null, 00:10:27.785 "uuid": "22adee90-ccbc-43df-b313-0862323cda9b", 00:10:27.785 "is_configured": false, 00:10:27.785 "data_offset": 0, 00:10:27.785 "data_size": 65536 00:10:27.785 }, 00:10:27.785 { 00:10:27.785 "name": null, 00:10:27.785 "uuid": "e62fc542-5243-44aa-b4f7-f66bc109e722", 00:10:27.785 "is_configured": false, 00:10:27.785 "data_offset": 0, 00:10:27.785 "data_size": 65536 00:10:27.785 } 00:10:27.785 ] 00:10:27.785 }' 00:10:27.785 10:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.785 10:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.042 10:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.042 10:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:28.042 10:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.042 10:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.301 10:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.301 10:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:28.301 10:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:28.301 10:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.301 10:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.301 [2024-11-20 10:33:31.563870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:28.301 10:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.301 10:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:28.301 10:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.301 10:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.301 10:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.301 10:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:28.301 10:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:28.301 10:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.301 10:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.301 10:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.301 10:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.301 10:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.301 10:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.301 10:33:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.301 10:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.301 10:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.301 10:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.301 "name": "Existed_Raid", 00:10:28.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.301 "strip_size_kb": 0, 00:10:28.301 "state": "configuring", 00:10:28.301 "raid_level": "raid1", 00:10:28.301 "superblock": false, 00:10:28.301 "num_base_bdevs": 3, 00:10:28.301 "num_base_bdevs_discovered": 2, 00:10:28.301 "num_base_bdevs_operational": 3, 00:10:28.301 "base_bdevs_list": [ 00:10:28.301 { 00:10:28.301 "name": "BaseBdev1", 00:10:28.301 "uuid": "7e48d129-b437-4779-8f5c-17f8ca3cee95", 00:10:28.301 "is_configured": true, 00:10:28.301 "data_offset": 0, 00:10:28.301 "data_size": 65536 00:10:28.301 }, 00:10:28.301 { 00:10:28.301 "name": null, 00:10:28.301 "uuid": "22adee90-ccbc-43df-b313-0862323cda9b", 00:10:28.301 "is_configured": false, 00:10:28.301 "data_offset": 0, 00:10:28.301 "data_size": 65536 00:10:28.301 }, 00:10:28.301 { 00:10:28.301 "name": "BaseBdev3", 00:10:28.301 "uuid": "e62fc542-5243-44aa-b4f7-f66bc109e722", 00:10:28.301 "is_configured": true, 00:10:28.301 "data_offset": 0, 00:10:28.301 "data_size": 65536 00:10:28.301 } 00:10:28.301 ] 00:10:28.301 }' 00:10:28.301 10:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.301 10:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.559 10:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.559 10:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.559 10:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:10:28.559 10:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.559 10:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.559 10:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:28.559 10:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:28.559 10:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.559 10:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.817 [2024-11-20 10:33:32.039014] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:28.817 10:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.817 10:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:28.817 10:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.817 10:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.817 10:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.817 10:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:28.817 10:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:28.817 10:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.817 10:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.817 10:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.817 10:33:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.817 10:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.817 10:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.817 10:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.817 10:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.817 10:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.817 10:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.817 "name": "Existed_Raid", 00:10:28.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.817 "strip_size_kb": 0, 00:10:28.817 "state": "configuring", 00:10:28.817 "raid_level": "raid1", 00:10:28.817 "superblock": false, 00:10:28.817 "num_base_bdevs": 3, 00:10:28.817 "num_base_bdevs_discovered": 1, 00:10:28.817 "num_base_bdevs_operational": 3, 00:10:28.817 "base_bdevs_list": [ 00:10:28.817 { 00:10:28.817 "name": null, 00:10:28.817 "uuid": "7e48d129-b437-4779-8f5c-17f8ca3cee95", 00:10:28.817 "is_configured": false, 00:10:28.817 "data_offset": 0, 00:10:28.817 "data_size": 65536 00:10:28.817 }, 00:10:28.817 { 00:10:28.817 "name": null, 00:10:28.817 "uuid": "22adee90-ccbc-43df-b313-0862323cda9b", 00:10:28.817 "is_configured": false, 00:10:28.817 "data_offset": 0, 00:10:28.817 "data_size": 65536 00:10:28.817 }, 00:10:28.817 { 00:10:28.817 "name": "BaseBdev3", 00:10:28.817 "uuid": "e62fc542-5243-44aa-b4f7-f66bc109e722", 00:10:28.817 "is_configured": true, 00:10:28.817 "data_offset": 0, 00:10:28.817 "data_size": 65536 00:10:28.817 } 00:10:28.817 ] 00:10:28.817 }' 00:10:28.817 10:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.817 10:33:32 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:29.384 10:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:29.384 10:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.384 10:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.384 10:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.384 10:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.384 10:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:29.384 10:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:29.384 10:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.384 10:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.384 [2024-11-20 10:33:32.623866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:29.384 10:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.384 10:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:29.384 10:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.384 10:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.384 10:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:29.384 10:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:29.384 10:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:10:29.384 10:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.384 10:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.384 10:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.384 10:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.384 10:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.384 10:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.384 10:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.384 10:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.384 10:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.384 10:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.384 "name": "Existed_Raid", 00:10:29.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.384 "strip_size_kb": 0, 00:10:29.384 "state": "configuring", 00:10:29.384 "raid_level": "raid1", 00:10:29.384 "superblock": false, 00:10:29.384 "num_base_bdevs": 3, 00:10:29.384 "num_base_bdevs_discovered": 2, 00:10:29.384 "num_base_bdevs_operational": 3, 00:10:29.384 "base_bdevs_list": [ 00:10:29.384 { 00:10:29.384 "name": null, 00:10:29.384 "uuid": "7e48d129-b437-4779-8f5c-17f8ca3cee95", 00:10:29.384 "is_configured": false, 00:10:29.384 "data_offset": 0, 00:10:29.384 "data_size": 65536 00:10:29.384 }, 00:10:29.384 { 00:10:29.384 "name": "BaseBdev2", 00:10:29.384 "uuid": "22adee90-ccbc-43df-b313-0862323cda9b", 00:10:29.384 "is_configured": true, 00:10:29.384 "data_offset": 0, 00:10:29.384 "data_size": 65536 00:10:29.384 }, 00:10:29.384 { 
00:10:29.384 "name": "BaseBdev3", 00:10:29.384 "uuid": "e62fc542-5243-44aa-b4f7-f66bc109e722", 00:10:29.384 "is_configured": true, 00:10:29.384 "data_offset": 0, 00:10:29.384 "data_size": 65536 00:10:29.384 } 00:10:29.384 ] 00:10:29.384 }' 00:10:29.384 10:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.384 10:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.643 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.643 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:29.643 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.643 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.643 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.643 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:29.902 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:29.902 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.902 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.902 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.902 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.902 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7e48d129-b437-4779-8f5c-17f8ca3cee95 00:10:29.902 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.902 10:33:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.902 [2024-11-20 10:33:33.209946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:29.902 [2024-11-20 10:33:33.210109] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:29.902 [2024-11-20 10:33:33.210135] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:29.902 [2024-11-20 10:33:33.210468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:29.902 [2024-11-20 10:33:33.210696] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:29.902 [2024-11-20 10:33:33.210748] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:29.902 [2024-11-20 10:33:33.211084] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:29.902 NewBaseBdev 00:10:29.902 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.902 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:29.902 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:29.902 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:29.902 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:29.902 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:29.902 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:29.902 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:29.902 10:33:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.902 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.902 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.902 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:29.902 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.902 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.902 [ 00:10:29.902 { 00:10:29.902 "name": "NewBaseBdev", 00:10:29.902 "aliases": [ 00:10:29.902 "7e48d129-b437-4779-8f5c-17f8ca3cee95" 00:10:29.902 ], 00:10:29.902 "product_name": "Malloc disk", 00:10:29.902 "block_size": 512, 00:10:29.902 "num_blocks": 65536, 00:10:29.902 "uuid": "7e48d129-b437-4779-8f5c-17f8ca3cee95", 00:10:29.902 "assigned_rate_limits": { 00:10:29.902 "rw_ios_per_sec": 0, 00:10:29.902 "rw_mbytes_per_sec": 0, 00:10:29.902 "r_mbytes_per_sec": 0, 00:10:29.902 "w_mbytes_per_sec": 0 00:10:29.902 }, 00:10:29.902 "claimed": true, 00:10:29.902 "claim_type": "exclusive_write", 00:10:29.902 "zoned": false, 00:10:29.902 "supported_io_types": { 00:10:29.902 "read": true, 00:10:29.902 "write": true, 00:10:29.902 "unmap": true, 00:10:29.902 "flush": true, 00:10:29.902 "reset": true, 00:10:29.902 "nvme_admin": false, 00:10:29.902 "nvme_io": false, 00:10:29.902 "nvme_io_md": false, 00:10:29.902 "write_zeroes": true, 00:10:29.902 "zcopy": true, 00:10:29.902 "get_zone_info": false, 00:10:29.902 "zone_management": false, 00:10:29.902 "zone_append": false, 00:10:29.902 "compare": false, 00:10:29.902 "compare_and_write": false, 00:10:29.902 "abort": true, 00:10:29.902 "seek_hole": false, 00:10:29.902 "seek_data": false, 00:10:29.902 "copy": true, 00:10:29.902 "nvme_iov_md": false 00:10:29.902 }, 00:10:29.902 "memory_domains": [ 00:10:29.902 { 00:10:29.902 
"dma_device_id": "system", 00:10:29.902 "dma_device_type": 1 00:10:29.902 }, 00:10:29.902 { 00:10:29.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.902 "dma_device_type": 2 00:10:29.902 } 00:10:29.902 ], 00:10:29.902 "driver_specific": {} 00:10:29.902 } 00:10:29.902 ] 00:10:29.902 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.902 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:29.902 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:29.902 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.902 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:29.902 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:29.902 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:29.902 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:29.903 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.903 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.903 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.903 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.903 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.903 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.903 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:29.903 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.903 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.903 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.903 "name": "Existed_Raid", 00:10:29.903 "uuid": "3feddc61-eed3-4572-a109-1a1569a185f3", 00:10:29.903 "strip_size_kb": 0, 00:10:29.903 "state": "online", 00:10:29.903 "raid_level": "raid1", 00:10:29.903 "superblock": false, 00:10:29.903 "num_base_bdevs": 3, 00:10:29.903 "num_base_bdevs_discovered": 3, 00:10:29.903 "num_base_bdevs_operational": 3, 00:10:29.903 "base_bdevs_list": [ 00:10:29.903 { 00:10:29.903 "name": "NewBaseBdev", 00:10:29.903 "uuid": "7e48d129-b437-4779-8f5c-17f8ca3cee95", 00:10:29.903 "is_configured": true, 00:10:29.903 "data_offset": 0, 00:10:29.903 "data_size": 65536 00:10:29.903 }, 00:10:29.903 { 00:10:29.903 "name": "BaseBdev2", 00:10:29.903 "uuid": "22adee90-ccbc-43df-b313-0862323cda9b", 00:10:29.903 "is_configured": true, 00:10:29.903 "data_offset": 0, 00:10:29.903 "data_size": 65536 00:10:29.903 }, 00:10:29.903 { 00:10:29.903 "name": "BaseBdev3", 00:10:29.903 "uuid": "e62fc542-5243-44aa-b4f7-f66bc109e722", 00:10:29.903 "is_configured": true, 00:10:29.903 "data_offset": 0, 00:10:29.903 "data_size": 65536 00:10:29.903 } 00:10:29.903 ] 00:10:29.903 }' 00:10:29.903 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.903 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.469 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:30.469 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:30.469 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:30.469 10:33:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.470 [2024-11-20 10:33:33.677580] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:30.470 "name": "Existed_Raid", 00:10:30.470 "aliases": [ 00:10:30.470 "3feddc61-eed3-4572-a109-1a1569a185f3" 00:10:30.470 ], 00:10:30.470 "product_name": "Raid Volume", 00:10:30.470 "block_size": 512, 00:10:30.470 "num_blocks": 65536, 00:10:30.470 "uuid": "3feddc61-eed3-4572-a109-1a1569a185f3", 00:10:30.470 "assigned_rate_limits": { 00:10:30.470 "rw_ios_per_sec": 0, 00:10:30.470 "rw_mbytes_per_sec": 0, 00:10:30.470 "r_mbytes_per_sec": 0, 00:10:30.470 "w_mbytes_per_sec": 0 00:10:30.470 }, 00:10:30.470 "claimed": false, 00:10:30.470 "zoned": false, 00:10:30.470 "supported_io_types": { 00:10:30.470 "read": true, 00:10:30.470 "write": true, 00:10:30.470 "unmap": false, 00:10:30.470 "flush": false, 00:10:30.470 "reset": true, 00:10:30.470 "nvme_admin": false, 00:10:30.470 "nvme_io": false, 00:10:30.470 "nvme_io_md": false, 00:10:30.470 "write_zeroes": true, 00:10:30.470 "zcopy": false, 00:10:30.470 
"get_zone_info": false, 00:10:30.470 "zone_management": false, 00:10:30.470 "zone_append": false, 00:10:30.470 "compare": false, 00:10:30.470 "compare_and_write": false, 00:10:30.470 "abort": false, 00:10:30.470 "seek_hole": false, 00:10:30.470 "seek_data": false, 00:10:30.470 "copy": false, 00:10:30.470 "nvme_iov_md": false 00:10:30.470 }, 00:10:30.470 "memory_domains": [ 00:10:30.470 { 00:10:30.470 "dma_device_id": "system", 00:10:30.470 "dma_device_type": 1 00:10:30.470 }, 00:10:30.470 { 00:10:30.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.470 "dma_device_type": 2 00:10:30.470 }, 00:10:30.470 { 00:10:30.470 "dma_device_id": "system", 00:10:30.470 "dma_device_type": 1 00:10:30.470 }, 00:10:30.470 { 00:10:30.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.470 "dma_device_type": 2 00:10:30.470 }, 00:10:30.470 { 00:10:30.470 "dma_device_id": "system", 00:10:30.470 "dma_device_type": 1 00:10:30.470 }, 00:10:30.470 { 00:10:30.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.470 "dma_device_type": 2 00:10:30.470 } 00:10:30.470 ], 00:10:30.470 "driver_specific": { 00:10:30.470 "raid": { 00:10:30.470 "uuid": "3feddc61-eed3-4572-a109-1a1569a185f3", 00:10:30.470 "strip_size_kb": 0, 00:10:30.470 "state": "online", 00:10:30.470 "raid_level": "raid1", 00:10:30.470 "superblock": false, 00:10:30.470 "num_base_bdevs": 3, 00:10:30.470 "num_base_bdevs_discovered": 3, 00:10:30.470 "num_base_bdevs_operational": 3, 00:10:30.470 "base_bdevs_list": [ 00:10:30.470 { 00:10:30.470 "name": "NewBaseBdev", 00:10:30.470 "uuid": "7e48d129-b437-4779-8f5c-17f8ca3cee95", 00:10:30.470 "is_configured": true, 00:10:30.470 "data_offset": 0, 00:10:30.470 "data_size": 65536 00:10:30.470 }, 00:10:30.470 { 00:10:30.470 "name": "BaseBdev2", 00:10:30.470 "uuid": "22adee90-ccbc-43df-b313-0862323cda9b", 00:10:30.470 "is_configured": true, 00:10:30.470 "data_offset": 0, 00:10:30.470 "data_size": 65536 00:10:30.470 }, 00:10:30.470 { 00:10:30.470 "name": "BaseBdev3", 00:10:30.470 "uuid": 
"e62fc542-5243-44aa-b4f7-f66bc109e722", 00:10:30.470 "is_configured": true, 00:10:30.470 "data_offset": 0, 00:10:30.470 "data_size": 65536 00:10:30.470 } 00:10:30.470 ] 00:10:30.470 } 00:10:30.470 } 00:10:30.470 }' 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:30.470 BaseBdev2 00:10:30.470 BaseBdev3' 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.470 
[2024-11-20 10:33:33.928803] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:30.470 [2024-11-20 10:33:33.928890] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:30.470 [2024-11-20 10:33:33.929002] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:30.470 [2024-11-20 10:33:33.929365] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:30.470 [2024-11-20 10:33:33.929438] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67576 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67576 ']' 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67576 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:30.470 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67576 00:10:30.729 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:30.729 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:30.729 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67576' 00:10:30.729 killing process with pid 67576 00:10:30.729 10:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67576 00:10:30.729 10:33:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67576 00:10:30.729 [2024-11-20 10:33:33.974431] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:30.987 [2024-11-20 10:33:34.325461] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:32.388 10:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:32.389 ************************************ 00:10:32.389 END TEST raid_state_function_test 00:10:32.389 ************************************ 00:10:32.389 00:10:32.389 real 0m10.798s 00:10:32.389 user 0m17.132s 00:10:32.389 sys 0m1.803s 00:10:32.389 10:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.389 10:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.389 10:33:35 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:10:32.389 10:33:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:32.389 10:33:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.389 10:33:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:32.389 ************************************ 00:10:32.389 START TEST raid_state_function_test_sb 00:10:32.389 ************************************ 00:10:32.389 10:33:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:10:32.389 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:32.389 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:32.389 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:32.389 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:32.389 10:33:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:32.389 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:32.389 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:32.389 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:32.389 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:32.389 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:32.389 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:32.389 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:32.389 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:32.389 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:32.389 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:32.389 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:32.389 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:32.389 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:32.389 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:32.389 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:32.389 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:32.389 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:32.389 
10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:32.389 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:32.389 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:32.389 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68206 00:10:32.389 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:32.389 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68206' 00:10:32.389 Process raid pid: 68206 00:10:32.389 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68206 00:10:32.389 10:33:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68206 ']' 00:10:32.389 10:33:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.389 10:33:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:32.389 10:33:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.389 10:33:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:32.389 10:33:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.389 [2024-11-20 10:33:35.689413] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:10:32.389 [2024-11-20 10:33:35.689629] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.647 [2024-11-20 10:33:35.875779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.647 [2024-11-20 10:33:35.994389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.904 [2024-11-20 10:33:36.212689] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:32.904 [2024-11-20 10:33:36.212728] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.162 10:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:33.162 10:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:33.162 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:33.162 10:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.162 10:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.162 [2024-11-20 10:33:36.566043] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:33.162 [2024-11-20 10:33:36.566102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:33.162 [2024-11-20 10:33:36.566113] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:33.162 [2024-11-20 10:33:36.566124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:33.162 [2024-11-20 10:33:36.566132] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:33.162 [2024-11-20 10:33:36.566142] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:33.162 10:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.162 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:33.162 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.162 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.162 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:33.162 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:33.162 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:33.162 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.162 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.162 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.162 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.162 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.162 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.162 10:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.162 10:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.162 10:33:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.162 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.162 "name": "Existed_Raid", 00:10:33.162 "uuid": "9a0a97cb-55fb-443c-b1ae-75b053cdc893", 00:10:33.162 "strip_size_kb": 0, 00:10:33.162 "state": "configuring", 00:10:33.162 "raid_level": "raid1", 00:10:33.162 "superblock": true, 00:10:33.162 "num_base_bdevs": 3, 00:10:33.162 "num_base_bdevs_discovered": 0, 00:10:33.162 "num_base_bdevs_operational": 3, 00:10:33.162 "base_bdevs_list": [ 00:10:33.162 { 00:10:33.162 "name": "BaseBdev1", 00:10:33.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.163 "is_configured": false, 00:10:33.163 "data_offset": 0, 00:10:33.163 "data_size": 0 00:10:33.163 }, 00:10:33.163 { 00:10:33.163 "name": "BaseBdev2", 00:10:33.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.163 "is_configured": false, 00:10:33.163 "data_offset": 0, 00:10:33.163 "data_size": 0 00:10:33.163 }, 00:10:33.163 { 00:10:33.163 "name": "BaseBdev3", 00:10:33.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.163 "is_configured": false, 00:10:33.163 "data_offset": 0, 00:10:33.163 "data_size": 0 00:10:33.163 } 00:10:33.163 ] 00:10:33.163 }' 00:10:33.163 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.163 10:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.730 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:33.730 10:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.730 10:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.730 [2024-11-20 10:33:37.029237] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:33.730 [2024-11-20 10:33:37.029274] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:33.730 10:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.730 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:33.730 10:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.730 10:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.730 [2024-11-20 10:33:37.037209] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:33.730 [2024-11-20 10:33:37.037263] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:33.730 [2024-11-20 10:33:37.037275] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:33.730 [2024-11-20 10:33:37.037287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:33.730 [2024-11-20 10:33:37.037294] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:33.730 [2024-11-20 10:33:37.037305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:33.730 10:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.730 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:33.730 10:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.730 10:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.730 [2024-11-20 10:33:37.088836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:33.730 BaseBdev1 
00:10:33.730 10:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.730 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:33.730 10:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:33.730 10:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:33.730 10:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:33.730 10:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:33.730 10:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:33.730 10:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:33.730 10:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.730 10:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.730 10:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.730 10:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:33.730 10:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.730 10:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.730 [ 00:10:33.730 { 00:10:33.730 "name": "BaseBdev1", 00:10:33.730 "aliases": [ 00:10:33.730 "572056cb-85fb-4935-a93e-c99ffdd35e78" 00:10:33.730 ], 00:10:33.730 "product_name": "Malloc disk", 00:10:33.730 "block_size": 512, 00:10:33.730 "num_blocks": 65536, 00:10:33.730 "uuid": "572056cb-85fb-4935-a93e-c99ffdd35e78", 00:10:33.730 "assigned_rate_limits": { 00:10:33.730 
"rw_ios_per_sec": 0, 00:10:33.730 "rw_mbytes_per_sec": 0, 00:10:33.730 "r_mbytes_per_sec": 0, 00:10:33.730 "w_mbytes_per_sec": 0 00:10:33.730 }, 00:10:33.730 "claimed": true, 00:10:33.730 "claim_type": "exclusive_write", 00:10:33.730 "zoned": false, 00:10:33.730 "supported_io_types": { 00:10:33.730 "read": true, 00:10:33.730 "write": true, 00:10:33.730 "unmap": true, 00:10:33.730 "flush": true, 00:10:33.730 "reset": true, 00:10:33.730 "nvme_admin": false, 00:10:33.730 "nvme_io": false, 00:10:33.730 "nvme_io_md": false, 00:10:33.730 "write_zeroes": true, 00:10:33.730 "zcopy": true, 00:10:33.730 "get_zone_info": false, 00:10:33.731 "zone_management": false, 00:10:33.731 "zone_append": false, 00:10:33.731 "compare": false, 00:10:33.731 "compare_and_write": false, 00:10:33.731 "abort": true, 00:10:33.731 "seek_hole": false, 00:10:33.731 "seek_data": false, 00:10:33.731 "copy": true, 00:10:33.731 "nvme_iov_md": false 00:10:33.731 }, 00:10:33.731 "memory_domains": [ 00:10:33.731 { 00:10:33.731 "dma_device_id": "system", 00:10:33.731 "dma_device_type": 1 00:10:33.731 }, 00:10:33.731 { 00:10:33.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.731 "dma_device_type": 2 00:10:33.731 } 00:10:33.731 ], 00:10:33.731 "driver_specific": {} 00:10:33.731 } 00:10:33.731 ] 00:10:33.731 10:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.731 10:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:33.731 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:33.731 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.731 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.731 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:33.731 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:33.731 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:33.731 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.731 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.731 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.731 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.731 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.731 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.731 10:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.731 10:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.731 10:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.731 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.731 "name": "Existed_Raid", 00:10:33.731 "uuid": "99fe4fce-44e5-4211-a0c3-73dfb2cafb27", 00:10:33.731 "strip_size_kb": 0, 00:10:33.731 "state": "configuring", 00:10:33.731 "raid_level": "raid1", 00:10:33.731 "superblock": true, 00:10:33.731 "num_base_bdevs": 3, 00:10:33.731 "num_base_bdevs_discovered": 1, 00:10:33.731 "num_base_bdevs_operational": 3, 00:10:33.731 "base_bdevs_list": [ 00:10:33.731 { 00:10:33.731 "name": "BaseBdev1", 00:10:33.731 "uuid": "572056cb-85fb-4935-a93e-c99ffdd35e78", 00:10:33.731 "is_configured": true, 00:10:33.731 "data_offset": 2048, 00:10:33.731 "data_size": 63488 
00:10:33.731 }, 00:10:33.731 { 00:10:33.731 "name": "BaseBdev2", 00:10:33.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.731 "is_configured": false, 00:10:33.731 "data_offset": 0, 00:10:33.731 "data_size": 0 00:10:33.731 }, 00:10:33.731 { 00:10:33.731 "name": "BaseBdev3", 00:10:33.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.731 "is_configured": false, 00:10:33.731 "data_offset": 0, 00:10:33.731 "data_size": 0 00:10:33.731 } 00:10:33.731 ] 00:10:33.731 }' 00:10:33.731 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.731 10:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.298 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:34.298 10:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.298 10:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.298 [2024-11-20 10:33:37.576053] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:34.298 [2024-11-20 10:33:37.576148] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:34.298 10:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.298 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:34.298 10:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.298 10:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.298 [2024-11-20 10:33:37.588112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:34.298 [2024-11-20 10:33:37.590061] 
bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:34.298 [2024-11-20 10:33:37.590145] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:34.298 [2024-11-20 10:33:37.590178] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:34.298 [2024-11-20 10:33:37.590218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:34.298 10:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.298 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:34.298 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:34.298 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:34.298 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.298 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.298 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:34.298 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:34.298 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:34.298 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.298 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.299 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.299 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:10:34.299 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.299 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.299 10:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.299 10:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.299 10:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.299 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.299 "name": "Existed_Raid", 00:10:34.299 "uuid": "63734101-ed59-4f90-ad0c-7ba94421d80c", 00:10:34.299 "strip_size_kb": 0, 00:10:34.299 "state": "configuring", 00:10:34.299 "raid_level": "raid1", 00:10:34.299 "superblock": true, 00:10:34.299 "num_base_bdevs": 3, 00:10:34.299 "num_base_bdevs_discovered": 1, 00:10:34.299 "num_base_bdevs_operational": 3, 00:10:34.299 "base_bdevs_list": [ 00:10:34.299 { 00:10:34.299 "name": "BaseBdev1", 00:10:34.299 "uuid": "572056cb-85fb-4935-a93e-c99ffdd35e78", 00:10:34.299 "is_configured": true, 00:10:34.299 "data_offset": 2048, 00:10:34.299 "data_size": 63488 00:10:34.299 }, 00:10:34.299 { 00:10:34.299 "name": "BaseBdev2", 00:10:34.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.299 "is_configured": false, 00:10:34.299 "data_offset": 0, 00:10:34.299 "data_size": 0 00:10:34.299 }, 00:10:34.299 { 00:10:34.299 "name": "BaseBdev3", 00:10:34.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.299 "is_configured": false, 00:10:34.299 "data_offset": 0, 00:10:34.299 "data_size": 0 00:10:34.299 } 00:10:34.299 ] 00:10:34.299 }' 00:10:34.299 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.299 10:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:34.865 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:34.865 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.865 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.865 [2024-11-20 10:33:38.082882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:34.865 BaseBdev2 00:10:34.865 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.865 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:34.865 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:34.865 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:34.865 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:34.865 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:34.865 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:34.865 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:34.865 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.865 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.865 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.865 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:34.865 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:34.865 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.865 [ 00:10:34.865 { 00:10:34.865 "name": "BaseBdev2", 00:10:34.865 "aliases": [ 00:10:34.865 "e52a91a3-71d2-4bce-8595-1cb992508d4d" 00:10:34.865 ], 00:10:34.865 "product_name": "Malloc disk", 00:10:34.865 "block_size": 512, 00:10:34.865 "num_blocks": 65536, 00:10:34.865 "uuid": "e52a91a3-71d2-4bce-8595-1cb992508d4d", 00:10:34.865 "assigned_rate_limits": { 00:10:34.865 "rw_ios_per_sec": 0, 00:10:34.865 "rw_mbytes_per_sec": 0, 00:10:34.865 "r_mbytes_per_sec": 0, 00:10:34.865 "w_mbytes_per_sec": 0 00:10:34.865 }, 00:10:34.865 "claimed": true, 00:10:34.865 "claim_type": "exclusive_write", 00:10:34.865 "zoned": false, 00:10:34.865 "supported_io_types": { 00:10:34.865 "read": true, 00:10:34.865 "write": true, 00:10:34.865 "unmap": true, 00:10:34.865 "flush": true, 00:10:34.865 "reset": true, 00:10:34.865 "nvme_admin": false, 00:10:34.865 "nvme_io": false, 00:10:34.865 "nvme_io_md": false, 00:10:34.865 "write_zeroes": true, 00:10:34.865 "zcopy": true, 00:10:34.865 "get_zone_info": false, 00:10:34.865 "zone_management": false, 00:10:34.865 "zone_append": false, 00:10:34.865 "compare": false, 00:10:34.865 "compare_and_write": false, 00:10:34.865 "abort": true, 00:10:34.865 "seek_hole": false, 00:10:34.865 "seek_data": false, 00:10:34.865 "copy": true, 00:10:34.865 "nvme_iov_md": false 00:10:34.865 }, 00:10:34.865 "memory_domains": [ 00:10:34.865 { 00:10:34.865 "dma_device_id": "system", 00:10:34.865 "dma_device_type": 1 00:10:34.865 }, 00:10:34.865 { 00:10:34.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.865 "dma_device_type": 2 00:10:34.865 } 00:10:34.865 ], 00:10:34.866 "driver_specific": {} 00:10:34.866 } 00:10:34.866 ] 00:10:34.866 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.866 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:10:34.866 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:34.866 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:34.866 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:34.866 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.866 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.866 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:34.866 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:34.866 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:34.866 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.866 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.866 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.866 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.866 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.866 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.866 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.866 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.866 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.866 
10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.866 "name": "Existed_Raid", 00:10:34.866 "uuid": "63734101-ed59-4f90-ad0c-7ba94421d80c", 00:10:34.866 "strip_size_kb": 0, 00:10:34.866 "state": "configuring", 00:10:34.866 "raid_level": "raid1", 00:10:34.866 "superblock": true, 00:10:34.866 "num_base_bdevs": 3, 00:10:34.866 "num_base_bdevs_discovered": 2, 00:10:34.866 "num_base_bdevs_operational": 3, 00:10:34.866 "base_bdevs_list": [ 00:10:34.866 { 00:10:34.866 "name": "BaseBdev1", 00:10:34.866 "uuid": "572056cb-85fb-4935-a93e-c99ffdd35e78", 00:10:34.866 "is_configured": true, 00:10:34.866 "data_offset": 2048, 00:10:34.866 "data_size": 63488 00:10:34.866 }, 00:10:34.866 { 00:10:34.866 "name": "BaseBdev2", 00:10:34.866 "uuid": "e52a91a3-71d2-4bce-8595-1cb992508d4d", 00:10:34.866 "is_configured": true, 00:10:34.866 "data_offset": 2048, 00:10:34.866 "data_size": 63488 00:10:34.866 }, 00:10:34.866 { 00:10:34.866 "name": "BaseBdev3", 00:10:34.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.866 "is_configured": false, 00:10:34.866 "data_offset": 0, 00:10:34.866 "data_size": 0 00:10:34.866 } 00:10:34.866 ] 00:10:34.866 }' 00:10:34.866 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.866 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.124 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:35.124 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.124 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.383 [2024-11-20 10:33:38.631530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:35.383 [2024-11-20 10:33:38.631897] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:10:35.383 [2024-11-20 10:33:38.631958] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:35.383 [2024-11-20 10:33:38.632253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:35.383 BaseBdev3 00:10:35.383 [2024-11-20 10:33:38.632484] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:35.383 [2024-11-20 10:33:38.632539] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:35.383 [2024-11-20 10:33:38.632767] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:35.383 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.383 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:35.383 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:35.383 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:35.383 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:35.383 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:35.383 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:35.384 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:35.384 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.384 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.384 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.384 10:33:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:35.384 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.384 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.384 [ 00:10:35.384 { 00:10:35.384 "name": "BaseBdev3", 00:10:35.384 "aliases": [ 00:10:35.384 "c1986dea-83eb-4b0b-8f4c-d3e44010b960" 00:10:35.384 ], 00:10:35.384 "product_name": "Malloc disk", 00:10:35.384 "block_size": 512, 00:10:35.384 "num_blocks": 65536, 00:10:35.384 "uuid": "c1986dea-83eb-4b0b-8f4c-d3e44010b960", 00:10:35.384 "assigned_rate_limits": { 00:10:35.384 "rw_ios_per_sec": 0, 00:10:35.384 "rw_mbytes_per_sec": 0, 00:10:35.384 "r_mbytes_per_sec": 0, 00:10:35.384 "w_mbytes_per_sec": 0 00:10:35.384 }, 00:10:35.384 "claimed": true, 00:10:35.384 "claim_type": "exclusive_write", 00:10:35.384 "zoned": false, 00:10:35.384 "supported_io_types": { 00:10:35.384 "read": true, 00:10:35.384 "write": true, 00:10:35.384 "unmap": true, 00:10:35.384 "flush": true, 00:10:35.384 "reset": true, 00:10:35.384 "nvme_admin": false, 00:10:35.384 "nvme_io": false, 00:10:35.384 "nvme_io_md": false, 00:10:35.384 "write_zeroes": true, 00:10:35.384 "zcopy": true, 00:10:35.384 "get_zone_info": false, 00:10:35.384 "zone_management": false, 00:10:35.384 "zone_append": false, 00:10:35.384 "compare": false, 00:10:35.384 "compare_and_write": false, 00:10:35.384 "abort": true, 00:10:35.384 "seek_hole": false, 00:10:35.384 "seek_data": false, 00:10:35.384 "copy": true, 00:10:35.384 "nvme_iov_md": false 00:10:35.384 }, 00:10:35.384 "memory_domains": [ 00:10:35.384 { 00:10:35.384 "dma_device_id": "system", 00:10:35.384 "dma_device_type": 1 00:10:35.384 }, 00:10:35.384 { 00:10:35.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.384 "dma_device_type": 2 00:10:35.384 } 00:10:35.384 ], 00:10:35.384 "driver_specific": {} 00:10:35.384 } 00:10:35.384 ] 
00:10:35.384 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.384 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:35.384 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:35.384 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:35.384 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:35.384 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.384 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:35.384 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:35.384 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:35.384 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:35.384 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.384 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.384 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.384 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.384 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.384 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.384 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.384 
10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.384 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.384 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.384 "name": "Existed_Raid", 00:10:35.384 "uuid": "63734101-ed59-4f90-ad0c-7ba94421d80c", 00:10:35.384 "strip_size_kb": 0, 00:10:35.384 "state": "online", 00:10:35.384 "raid_level": "raid1", 00:10:35.384 "superblock": true, 00:10:35.384 "num_base_bdevs": 3, 00:10:35.384 "num_base_bdevs_discovered": 3, 00:10:35.384 "num_base_bdevs_operational": 3, 00:10:35.384 "base_bdevs_list": [ 00:10:35.384 { 00:10:35.384 "name": "BaseBdev1", 00:10:35.384 "uuid": "572056cb-85fb-4935-a93e-c99ffdd35e78", 00:10:35.384 "is_configured": true, 00:10:35.384 "data_offset": 2048, 00:10:35.384 "data_size": 63488 00:10:35.384 }, 00:10:35.384 { 00:10:35.384 "name": "BaseBdev2", 00:10:35.384 "uuid": "e52a91a3-71d2-4bce-8595-1cb992508d4d", 00:10:35.384 "is_configured": true, 00:10:35.384 "data_offset": 2048, 00:10:35.384 "data_size": 63488 00:10:35.384 }, 00:10:35.384 { 00:10:35.384 "name": "BaseBdev3", 00:10:35.384 "uuid": "c1986dea-83eb-4b0b-8f4c-d3e44010b960", 00:10:35.384 "is_configured": true, 00:10:35.384 "data_offset": 2048, 00:10:35.384 "data_size": 63488 00:10:35.384 } 00:10:35.384 ] 00:10:35.384 }' 00:10:35.384 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.384 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.642 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:35.642 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:35.642 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:10:35.642 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:35.642 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:35.642 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:35.642 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:35.642 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:35.642 10:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.642 10:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.643 [2024-11-20 10:33:39.107147] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:35.902 10:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.902 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:35.902 "name": "Existed_Raid", 00:10:35.902 "aliases": [ 00:10:35.902 "63734101-ed59-4f90-ad0c-7ba94421d80c" 00:10:35.902 ], 00:10:35.902 "product_name": "Raid Volume", 00:10:35.902 "block_size": 512, 00:10:35.902 "num_blocks": 63488, 00:10:35.902 "uuid": "63734101-ed59-4f90-ad0c-7ba94421d80c", 00:10:35.902 "assigned_rate_limits": { 00:10:35.902 "rw_ios_per_sec": 0, 00:10:35.902 "rw_mbytes_per_sec": 0, 00:10:35.902 "r_mbytes_per_sec": 0, 00:10:35.902 "w_mbytes_per_sec": 0 00:10:35.902 }, 00:10:35.902 "claimed": false, 00:10:35.902 "zoned": false, 00:10:35.902 "supported_io_types": { 00:10:35.902 "read": true, 00:10:35.902 "write": true, 00:10:35.902 "unmap": false, 00:10:35.902 "flush": false, 00:10:35.902 "reset": true, 00:10:35.902 "nvme_admin": false, 00:10:35.902 "nvme_io": false, 00:10:35.902 "nvme_io_md": false, 00:10:35.902 "write_zeroes": true, 
00:10:35.902 "zcopy": false, 00:10:35.902 "get_zone_info": false, 00:10:35.902 "zone_management": false, 00:10:35.902 "zone_append": false, 00:10:35.902 "compare": false, 00:10:35.902 "compare_and_write": false, 00:10:35.902 "abort": false, 00:10:35.902 "seek_hole": false, 00:10:35.902 "seek_data": false, 00:10:35.902 "copy": false, 00:10:35.902 "nvme_iov_md": false 00:10:35.902 }, 00:10:35.902 "memory_domains": [ 00:10:35.902 { 00:10:35.902 "dma_device_id": "system", 00:10:35.902 "dma_device_type": 1 00:10:35.902 }, 00:10:35.902 { 00:10:35.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.902 "dma_device_type": 2 00:10:35.902 }, 00:10:35.902 { 00:10:35.902 "dma_device_id": "system", 00:10:35.902 "dma_device_type": 1 00:10:35.902 }, 00:10:35.902 { 00:10:35.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.902 "dma_device_type": 2 00:10:35.902 }, 00:10:35.902 { 00:10:35.902 "dma_device_id": "system", 00:10:35.902 "dma_device_type": 1 00:10:35.902 }, 00:10:35.902 { 00:10:35.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.902 "dma_device_type": 2 00:10:35.902 } 00:10:35.902 ], 00:10:35.902 "driver_specific": { 00:10:35.902 "raid": { 00:10:35.902 "uuid": "63734101-ed59-4f90-ad0c-7ba94421d80c", 00:10:35.902 "strip_size_kb": 0, 00:10:35.902 "state": "online", 00:10:35.902 "raid_level": "raid1", 00:10:35.902 "superblock": true, 00:10:35.902 "num_base_bdevs": 3, 00:10:35.902 "num_base_bdevs_discovered": 3, 00:10:35.902 "num_base_bdevs_operational": 3, 00:10:35.902 "base_bdevs_list": [ 00:10:35.902 { 00:10:35.902 "name": "BaseBdev1", 00:10:35.902 "uuid": "572056cb-85fb-4935-a93e-c99ffdd35e78", 00:10:35.902 "is_configured": true, 00:10:35.902 "data_offset": 2048, 00:10:35.902 "data_size": 63488 00:10:35.902 }, 00:10:35.902 { 00:10:35.902 "name": "BaseBdev2", 00:10:35.902 "uuid": "e52a91a3-71d2-4bce-8595-1cb992508d4d", 00:10:35.902 "is_configured": true, 00:10:35.902 "data_offset": 2048, 00:10:35.902 "data_size": 63488 00:10:35.902 }, 00:10:35.902 { 
00:10:35.902 "name": "BaseBdev3", 00:10:35.902 "uuid": "c1986dea-83eb-4b0b-8f4c-d3e44010b960", 00:10:35.902 "is_configured": true, 00:10:35.902 "data_offset": 2048, 00:10:35.902 "data_size": 63488 00:10:35.902 } 00:10:35.902 ] 00:10:35.902 } 00:10:35.902 } 00:10:35.902 }' 00:10:35.902 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:35.902 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:35.902 BaseBdev2 00:10:35.902 BaseBdev3' 00:10:35.902 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.902 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:35.902 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.902 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:35.902 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.902 10:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.902 10:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.902 10:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.902 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.902 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.902 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.902 10:33:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:35.902 10:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.902 10:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.902 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.902 10:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.902 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.902 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.902 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.902 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.902 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:35.902 10:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.902 10:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.902 10:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.902 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.902 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.902 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:35.902 10:33:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.902 10:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.902 [2024-11-20 10:33:39.366505] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:36.160 10:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.160 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:36.160 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:36.160 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:36.160 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:36.160 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:36.160 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:36.160 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.160 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.160 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.160 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.161 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:36.161 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.161 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.161 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.161 
10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.161 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.161 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.161 10:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.161 10:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.161 10:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.161 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.161 "name": "Existed_Raid", 00:10:36.161 "uuid": "63734101-ed59-4f90-ad0c-7ba94421d80c", 00:10:36.161 "strip_size_kb": 0, 00:10:36.161 "state": "online", 00:10:36.161 "raid_level": "raid1", 00:10:36.161 "superblock": true, 00:10:36.161 "num_base_bdevs": 3, 00:10:36.161 "num_base_bdevs_discovered": 2, 00:10:36.161 "num_base_bdevs_operational": 2, 00:10:36.161 "base_bdevs_list": [ 00:10:36.161 { 00:10:36.161 "name": null, 00:10:36.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.161 "is_configured": false, 00:10:36.161 "data_offset": 0, 00:10:36.161 "data_size": 63488 00:10:36.161 }, 00:10:36.161 { 00:10:36.161 "name": "BaseBdev2", 00:10:36.161 "uuid": "e52a91a3-71d2-4bce-8595-1cb992508d4d", 00:10:36.161 "is_configured": true, 00:10:36.161 "data_offset": 2048, 00:10:36.161 "data_size": 63488 00:10:36.161 }, 00:10:36.161 { 00:10:36.161 "name": "BaseBdev3", 00:10:36.161 "uuid": "c1986dea-83eb-4b0b-8f4c-d3e44010b960", 00:10:36.161 "is_configured": true, 00:10:36.161 "data_offset": 2048, 00:10:36.161 "data_size": 63488 00:10:36.161 } 00:10:36.161 ] 00:10:36.161 }' 00:10:36.161 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.161 
10:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.726 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:36.726 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:36.726 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.727 10:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.727 10:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.727 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:36.727 10:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.727 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:36.727 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:36.727 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:36.727 10:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.727 10:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.727 [2024-11-20 10:33:39.963557] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:36.727 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.727 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:36.727 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:36.727 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:36.727 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.727 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.727 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:36.727 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.727 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:36.727 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:36.727 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:36.727 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.727 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.727 [2024-11-20 10:33:40.115680] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:36.727 [2024-11-20 10:33:40.115876] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:36.986 [2024-11-20 10:33:40.213126] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:36.986 [2024-11-20 10:33:40.213189] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:36.986 [2024-11-20 10:33:40.213201] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:36.986 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.986 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:36.986 10:33:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:36.986 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.986 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:36.986 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.986 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.986 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.986 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:36.986 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:36.986 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:36.986 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:36.986 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:36.986 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:36.986 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.986 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.986 BaseBdev2 00:10:36.986 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.986 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.987 [ 00:10:36.987 { 00:10:36.987 "name": "BaseBdev2", 00:10:36.987 "aliases": [ 00:10:36.987 "126d4914-c75f-40f8-9615-a0e603e40a03" 00:10:36.987 ], 00:10:36.987 "product_name": "Malloc disk", 00:10:36.987 "block_size": 512, 00:10:36.987 "num_blocks": 65536, 00:10:36.987 "uuid": "126d4914-c75f-40f8-9615-a0e603e40a03", 00:10:36.987 "assigned_rate_limits": { 00:10:36.987 "rw_ios_per_sec": 0, 00:10:36.987 "rw_mbytes_per_sec": 0, 00:10:36.987 "r_mbytes_per_sec": 0, 00:10:36.987 "w_mbytes_per_sec": 0 00:10:36.987 }, 00:10:36.987 "claimed": false, 00:10:36.987 "zoned": false, 00:10:36.987 "supported_io_types": { 00:10:36.987 "read": true, 00:10:36.987 "write": true, 00:10:36.987 "unmap": true, 00:10:36.987 "flush": true, 00:10:36.987 "reset": true, 00:10:36.987 "nvme_admin": false, 00:10:36.987 "nvme_io": false, 00:10:36.987 
"nvme_io_md": false, 00:10:36.987 "write_zeroes": true, 00:10:36.987 "zcopy": true, 00:10:36.987 "get_zone_info": false, 00:10:36.987 "zone_management": false, 00:10:36.987 "zone_append": false, 00:10:36.987 "compare": false, 00:10:36.987 "compare_and_write": false, 00:10:36.987 "abort": true, 00:10:36.987 "seek_hole": false, 00:10:36.987 "seek_data": false, 00:10:36.987 "copy": true, 00:10:36.987 "nvme_iov_md": false 00:10:36.987 }, 00:10:36.987 "memory_domains": [ 00:10:36.987 { 00:10:36.987 "dma_device_id": "system", 00:10:36.987 "dma_device_type": 1 00:10:36.987 }, 00:10:36.987 { 00:10:36.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.987 "dma_device_type": 2 00:10:36.987 } 00:10:36.987 ], 00:10:36.987 "driver_specific": {} 00:10:36.987 } 00:10:36.987 ] 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.987 BaseBdev3 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.987 [ 00:10:36.987 { 00:10:36.987 "name": "BaseBdev3", 00:10:36.987 "aliases": [ 00:10:36.987 "c38f2ea8-40fe-4ff3-9ec7-73cd2eab65a6" 00:10:36.987 ], 00:10:36.987 "product_name": "Malloc disk", 00:10:36.987 "block_size": 512, 00:10:36.987 "num_blocks": 65536, 00:10:36.987 "uuid": "c38f2ea8-40fe-4ff3-9ec7-73cd2eab65a6", 00:10:36.987 "assigned_rate_limits": { 00:10:36.987 "rw_ios_per_sec": 0, 00:10:36.987 "rw_mbytes_per_sec": 0, 00:10:36.987 "r_mbytes_per_sec": 0, 00:10:36.987 "w_mbytes_per_sec": 0 00:10:36.987 }, 00:10:36.987 "claimed": false, 00:10:36.987 "zoned": false, 00:10:36.987 "supported_io_types": { 00:10:36.987 "read": true, 00:10:36.987 "write": true, 00:10:36.987 "unmap": true, 00:10:36.987 "flush": true, 00:10:36.987 "reset": true, 00:10:36.987 "nvme_admin": false, 
00:10:36.987 "nvme_io": false, 00:10:36.987 "nvme_io_md": false, 00:10:36.987 "write_zeroes": true, 00:10:36.987 "zcopy": true, 00:10:36.987 "get_zone_info": false, 00:10:36.987 "zone_management": false, 00:10:36.987 "zone_append": false, 00:10:36.987 "compare": false, 00:10:36.987 "compare_and_write": false, 00:10:36.987 "abort": true, 00:10:36.987 "seek_hole": false, 00:10:36.987 "seek_data": false, 00:10:36.987 "copy": true, 00:10:36.987 "nvme_iov_md": false 00:10:36.987 }, 00:10:36.987 "memory_domains": [ 00:10:36.987 { 00:10:36.987 "dma_device_id": "system", 00:10:36.987 "dma_device_type": 1 00:10:36.987 }, 00:10:36.987 { 00:10:36.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.987 "dma_device_type": 2 00:10:36.987 } 00:10:36.987 ], 00:10:36.987 "driver_specific": {} 00:10:36.987 } 00:10:36.987 ] 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.987 [2024-11-20 10:33:40.408199] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:36.987 [2024-11-20 10:33:40.408245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:36.987 [2024-11-20 10:33:40.408267] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:36.987 [2024-11-20 10:33:40.410184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.987 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.987 
10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.246 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.246 "name": "Existed_Raid", 00:10:37.246 "uuid": "aa7ef5d6-a0a7-4916-9fee-98d41ffb6d88", 00:10:37.246 "strip_size_kb": 0, 00:10:37.246 "state": "configuring", 00:10:37.246 "raid_level": "raid1", 00:10:37.246 "superblock": true, 00:10:37.246 "num_base_bdevs": 3, 00:10:37.246 "num_base_bdevs_discovered": 2, 00:10:37.246 "num_base_bdevs_operational": 3, 00:10:37.246 "base_bdevs_list": [ 00:10:37.246 { 00:10:37.246 "name": "BaseBdev1", 00:10:37.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.246 "is_configured": false, 00:10:37.246 "data_offset": 0, 00:10:37.246 "data_size": 0 00:10:37.246 }, 00:10:37.246 { 00:10:37.246 "name": "BaseBdev2", 00:10:37.246 "uuid": "126d4914-c75f-40f8-9615-a0e603e40a03", 00:10:37.246 "is_configured": true, 00:10:37.246 "data_offset": 2048, 00:10:37.246 "data_size": 63488 00:10:37.246 }, 00:10:37.246 { 00:10:37.246 "name": "BaseBdev3", 00:10:37.246 "uuid": "c38f2ea8-40fe-4ff3-9ec7-73cd2eab65a6", 00:10:37.246 "is_configured": true, 00:10:37.246 "data_offset": 2048, 00:10:37.246 "data_size": 63488 00:10:37.246 } 00:10:37.246 ] 00:10:37.246 }' 00:10:37.246 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.246 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.505 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:37.505 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.505 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.505 [2024-11-20 10:33:40.851556] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:37.505 10:33:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.505 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:37.505 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.505 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.505 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:37.505 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:37.505 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:37.505 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.505 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.505 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.505 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.505 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.505 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.505 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.505 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.505 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.505 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.505 "name": 
"Existed_Raid", 00:10:37.505 "uuid": "aa7ef5d6-a0a7-4916-9fee-98d41ffb6d88", 00:10:37.505 "strip_size_kb": 0, 00:10:37.505 "state": "configuring", 00:10:37.505 "raid_level": "raid1", 00:10:37.505 "superblock": true, 00:10:37.505 "num_base_bdevs": 3, 00:10:37.505 "num_base_bdevs_discovered": 1, 00:10:37.505 "num_base_bdevs_operational": 3, 00:10:37.505 "base_bdevs_list": [ 00:10:37.505 { 00:10:37.505 "name": "BaseBdev1", 00:10:37.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.505 "is_configured": false, 00:10:37.505 "data_offset": 0, 00:10:37.505 "data_size": 0 00:10:37.505 }, 00:10:37.505 { 00:10:37.505 "name": null, 00:10:37.505 "uuid": "126d4914-c75f-40f8-9615-a0e603e40a03", 00:10:37.505 "is_configured": false, 00:10:37.505 "data_offset": 0, 00:10:37.505 "data_size": 63488 00:10:37.505 }, 00:10:37.505 { 00:10:37.505 "name": "BaseBdev3", 00:10:37.505 "uuid": "c38f2ea8-40fe-4ff3-9ec7-73cd2eab65a6", 00:10:37.505 "is_configured": true, 00:10:37.505 "data_offset": 2048, 00:10:37.505 "data_size": 63488 00:10:37.505 } 00:10:37.505 ] 00:10:37.505 }' 00:10:37.505 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.505 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:38.072 
10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.072 [2024-11-20 10:33:41.357970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:38.072 BaseBdev1 00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.072 [ 00:10:38.072 { 00:10:38.072 "name": "BaseBdev1", 00:10:38.072 "aliases": [ 00:10:38.072 "d97ac98f-52a8-46b7-9406-49a6fee62e53" 00:10:38.072 ], 00:10:38.072 "product_name": "Malloc disk", 00:10:38.072 "block_size": 512, 00:10:38.072 "num_blocks": 65536, 00:10:38.072 "uuid": "d97ac98f-52a8-46b7-9406-49a6fee62e53", 00:10:38.072 "assigned_rate_limits": { 00:10:38.072 "rw_ios_per_sec": 0, 00:10:38.072 "rw_mbytes_per_sec": 0, 00:10:38.072 "r_mbytes_per_sec": 0, 00:10:38.072 "w_mbytes_per_sec": 0 00:10:38.072 }, 00:10:38.072 "claimed": true, 00:10:38.072 "claim_type": "exclusive_write", 00:10:38.072 "zoned": false, 00:10:38.072 "supported_io_types": { 00:10:38.072 "read": true, 00:10:38.072 "write": true, 00:10:38.072 "unmap": true, 00:10:38.072 "flush": true, 00:10:38.072 "reset": true, 00:10:38.072 "nvme_admin": false, 00:10:38.072 "nvme_io": false, 00:10:38.072 "nvme_io_md": false, 00:10:38.072 "write_zeroes": true, 00:10:38.072 "zcopy": true, 00:10:38.072 "get_zone_info": false, 00:10:38.072 "zone_management": false, 00:10:38.072 "zone_append": false, 00:10:38.072 "compare": false, 00:10:38.072 "compare_and_write": false, 00:10:38.072 "abort": true, 00:10:38.072 "seek_hole": false, 00:10:38.072 "seek_data": false, 00:10:38.072 "copy": true, 00:10:38.072 "nvme_iov_md": false 00:10:38.072 }, 00:10:38.072 "memory_domains": [ 00:10:38.072 { 00:10:38.072 "dma_device_id": "system", 00:10:38.072 "dma_device_type": 1 00:10:38.072 }, 00:10:38.072 { 00:10:38.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.072 "dma_device_type": 2 00:10:38.072 } 00:10:38.072 ], 00:10:38.072 "driver_specific": {} 00:10:38.072 } 00:10:38.072 ] 00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:38.072 
10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.072 "name": "Existed_Raid", 00:10:38.072 "uuid": "aa7ef5d6-a0a7-4916-9fee-98d41ffb6d88", 00:10:38.072 "strip_size_kb": 0, 
00:10:38.072 "state": "configuring", 00:10:38.072 "raid_level": "raid1", 00:10:38.072 "superblock": true, 00:10:38.072 "num_base_bdevs": 3, 00:10:38.072 "num_base_bdevs_discovered": 2, 00:10:38.072 "num_base_bdevs_operational": 3, 00:10:38.072 "base_bdevs_list": [ 00:10:38.072 { 00:10:38.072 "name": "BaseBdev1", 00:10:38.072 "uuid": "d97ac98f-52a8-46b7-9406-49a6fee62e53", 00:10:38.072 "is_configured": true, 00:10:38.072 "data_offset": 2048, 00:10:38.072 "data_size": 63488 00:10:38.072 }, 00:10:38.072 { 00:10:38.072 "name": null, 00:10:38.072 "uuid": "126d4914-c75f-40f8-9615-a0e603e40a03", 00:10:38.072 "is_configured": false, 00:10:38.072 "data_offset": 0, 00:10:38.072 "data_size": 63488 00:10:38.072 }, 00:10:38.072 { 00:10:38.072 "name": "BaseBdev3", 00:10:38.072 "uuid": "c38f2ea8-40fe-4ff3-9ec7-73cd2eab65a6", 00:10:38.072 "is_configured": true, 00:10:38.072 "data_offset": 2048, 00:10:38.072 "data_size": 63488 00:10:38.072 } 00:10:38.072 ] 00:10:38.072 }' 00:10:38.072 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.073 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.639 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:38.639 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.639 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.639 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.639 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.639 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:38.639 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:10:38.639 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.639 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.639 [2024-11-20 10:33:41.877199] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:38.639 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.639 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:38.639 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.639 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.639 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.639 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:38.639 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.639 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.639 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.639 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.639 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.639 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.639 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.639 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:38.639 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.639 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.639 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.639 "name": "Existed_Raid", 00:10:38.639 "uuid": "aa7ef5d6-a0a7-4916-9fee-98d41ffb6d88", 00:10:38.639 "strip_size_kb": 0, 00:10:38.639 "state": "configuring", 00:10:38.639 "raid_level": "raid1", 00:10:38.639 "superblock": true, 00:10:38.639 "num_base_bdevs": 3, 00:10:38.639 "num_base_bdevs_discovered": 1, 00:10:38.639 "num_base_bdevs_operational": 3, 00:10:38.639 "base_bdevs_list": [ 00:10:38.639 { 00:10:38.639 "name": "BaseBdev1", 00:10:38.639 "uuid": "d97ac98f-52a8-46b7-9406-49a6fee62e53", 00:10:38.639 "is_configured": true, 00:10:38.639 "data_offset": 2048, 00:10:38.639 "data_size": 63488 00:10:38.639 }, 00:10:38.639 { 00:10:38.639 "name": null, 00:10:38.639 "uuid": "126d4914-c75f-40f8-9615-a0e603e40a03", 00:10:38.639 "is_configured": false, 00:10:38.639 "data_offset": 0, 00:10:38.639 "data_size": 63488 00:10:38.639 }, 00:10:38.639 { 00:10:38.639 "name": null, 00:10:38.639 "uuid": "c38f2ea8-40fe-4ff3-9ec7-73cd2eab65a6", 00:10:38.639 "is_configured": false, 00:10:38.639 "data_offset": 0, 00:10:38.639 "data_size": 63488 00:10:38.639 } 00:10:38.639 ] 00:10:38.639 }' 00:10:38.639 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.639 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.904 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.904 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:38.904 10:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:38.904 10:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.163 10:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.163 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:39.163 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:39.163 10:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.163 10:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.163 [2024-11-20 10:33:42.416336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:39.163 10:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.163 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:39.163 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.163 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.163 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:39.163 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:39.163 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:39.163 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.163 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.163 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:10:39.163 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.163 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.163 10:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.163 10:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.163 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.163 10:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.163 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.163 "name": "Existed_Raid", 00:10:39.163 "uuid": "aa7ef5d6-a0a7-4916-9fee-98d41ffb6d88", 00:10:39.163 "strip_size_kb": 0, 00:10:39.163 "state": "configuring", 00:10:39.163 "raid_level": "raid1", 00:10:39.163 "superblock": true, 00:10:39.163 "num_base_bdevs": 3, 00:10:39.163 "num_base_bdevs_discovered": 2, 00:10:39.163 "num_base_bdevs_operational": 3, 00:10:39.163 "base_bdevs_list": [ 00:10:39.163 { 00:10:39.163 "name": "BaseBdev1", 00:10:39.163 "uuid": "d97ac98f-52a8-46b7-9406-49a6fee62e53", 00:10:39.163 "is_configured": true, 00:10:39.163 "data_offset": 2048, 00:10:39.163 "data_size": 63488 00:10:39.163 }, 00:10:39.163 { 00:10:39.163 "name": null, 00:10:39.163 "uuid": "126d4914-c75f-40f8-9615-a0e603e40a03", 00:10:39.163 "is_configured": false, 00:10:39.163 "data_offset": 0, 00:10:39.163 "data_size": 63488 00:10:39.163 }, 00:10:39.163 { 00:10:39.163 "name": "BaseBdev3", 00:10:39.163 "uuid": "c38f2ea8-40fe-4ff3-9ec7-73cd2eab65a6", 00:10:39.163 "is_configured": true, 00:10:39.163 "data_offset": 2048, 00:10:39.163 "data_size": 63488 00:10:39.163 } 00:10:39.163 ] 00:10:39.163 }' 00:10:39.163 10:33:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.163 10:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.421 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.422 10:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.422 10:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.422 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:39.422 10:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.422 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:39.422 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:39.422 10:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.422 10:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.422 [2024-11-20 10:33:42.855669] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:39.680 10:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.680 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:39.680 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.680 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.680 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:39.680 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:10:39.680 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:39.680 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.680 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.680 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.680 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.680 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.680 10:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.680 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.680 10:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.680 10:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.680 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.680 "name": "Existed_Raid", 00:10:39.680 "uuid": "aa7ef5d6-a0a7-4916-9fee-98d41ffb6d88", 00:10:39.680 "strip_size_kb": 0, 00:10:39.680 "state": "configuring", 00:10:39.680 "raid_level": "raid1", 00:10:39.680 "superblock": true, 00:10:39.680 "num_base_bdevs": 3, 00:10:39.680 "num_base_bdevs_discovered": 1, 00:10:39.680 "num_base_bdevs_operational": 3, 00:10:39.680 "base_bdevs_list": [ 00:10:39.680 { 00:10:39.680 "name": null, 00:10:39.680 "uuid": "d97ac98f-52a8-46b7-9406-49a6fee62e53", 00:10:39.680 "is_configured": false, 00:10:39.680 "data_offset": 0, 00:10:39.680 "data_size": 63488 00:10:39.680 }, 00:10:39.680 { 00:10:39.680 "name": null, 00:10:39.680 "uuid": 
"126d4914-c75f-40f8-9615-a0e603e40a03", 00:10:39.680 "is_configured": false, 00:10:39.680 "data_offset": 0, 00:10:39.680 "data_size": 63488 00:10:39.680 }, 00:10:39.680 { 00:10:39.680 "name": "BaseBdev3", 00:10:39.680 "uuid": "c38f2ea8-40fe-4ff3-9ec7-73cd2eab65a6", 00:10:39.680 "is_configured": true, 00:10:39.680 "data_offset": 2048, 00:10:39.680 "data_size": 63488 00:10:39.680 } 00:10:39.680 ] 00:10:39.680 }' 00:10:39.680 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.680 10:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.938 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:39.938 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.938 10:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.938 10:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.938 10:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.195 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:40.195 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:40.195 10:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.195 10:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.195 [2024-11-20 10:33:43.420605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:40.195 10:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.195 10:33:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:40.195 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.195 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.195 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:40.195 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:40.195 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:40.195 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.195 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.195 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.195 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.195 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.195 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.195 10:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.195 10:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.195 10:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.195 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.195 "name": "Existed_Raid", 00:10:40.195 "uuid": "aa7ef5d6-a0a7-4916-9fee-98d41ffb6d88", 00:10:40.195 "strip_size_kb": 0, 00:10:40.195 "state": "configuring", 00:10:40.195 
"raid_level": "raid1", 00:10:40.195 "superblock": true, 00:10:40.195 "num_base_bdevs": 3, 00:10:40.195 "num_base_bdevs_discovered": 2, 00:10:40.195 "num_base_bdevs_operational": 3, 00:10:40.195 "base_bdevs_list": [ 00:10:40.195 { 00:10:40.195 "name": null, 00:10:40.195 "uuid": "d97ac98f-52a8-46b7-9406-49a6fee62e53", 00:10:40.195 "is_configured": false, 00:10:40.195 "data_offset": 0, 00:10:40.195 "data_size": 63488 00:10:40.195 }, 00:10:40.195 { 00:10:40.195 "name": "BaseBdev2", 00:10:40.195 "uuid": "126d4914-c75f-40f8-9615-a0e603e40a03", 00:10:40.195 "is_configured": true, 00:10:40.195 "data_offset": 2048, 00:10:40.195 "data_size": 63488 00:10:40.195 }, 00:10:40.195 { 00:10:40.195 "name": "BaseBdev3", 00:10:40.195 "uuid": "c38f2ea8-40fe-4ff3-9ec7-73cd2eab65a6", 00:10:40.196 "is_configured": true, 00:10:40.196 "data_offset": 2048, 00:10:40.196 "data_size": 63488 00:10:40.196 } 00:10:40.196 ] 00:10:40.196 }' 00:10:40.196 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.196 10:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.454 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.454 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:40.454 10:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.454 10:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.454 10:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.454 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:40.454 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.454 10:33:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:40.454 10:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.454 10:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.454 10:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.712 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d97ac98f-52a8-46b7-9406-49a6fee62e53 00:10:40.712 10:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.712 10:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.712 [2024-11-20 10:33:43.992343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:40.712 [2024-11-20 10:33:43.992729] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:40.712 [2024-11-20 10:33:43.992750] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:40.712 [2024-11-20 10:33:43.993023] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:40.712 NewBaseBdev 00:10:40.712 [2024-11-20 10:33:43.993218] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:40.712 [2024-11-20 10:33:43.993238] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:40.712 [2024-11-20 10:33:43.993433] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.712 10:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.712 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:40.712 
10:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:40.712 10:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:40.712 10:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:40.712 10:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:40.712 10:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:40.712 10:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:40.712 10:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.712 10:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.712 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.712 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:40.712 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.712 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.712 [ 00:10:40.712 { 00:10:40.712 "name": "NewBaseBdev", 00:10:40.712 "aliases": [ 00:10:40.712 "d97ac98f-52a8-46b7-9406-49a6fee62e53" 00:10:40.712 ], 00:10:40.712 "product_name": "Malloc disk", 00:10:40.712 "block_size": 512, 00:10:40.712 "num_blocks": 65536, 00:10:40.712 "uuid": "d97ac98f-52a8-46b7-9406-49a6fee62e53", 00:10:40.712 "assigned_rate_limits": { 00:10:40.712 "rw_ios_per_sec": 0, 00:10:40.712 "rw_mbytes_per_sec": 0, 00:10:40.712 "r_mbytes_per_sec": 0, 00:10:40.712 "w_mbytes_per_sec": 0 00:10:40.712 }, 00:10:40.712 "claimed": true, 00:10:40.712 "claim_type": "exclusive_write", 00:10:40.712 
"zoned": false, 00:10:40.712 "supported_io_types": { 00:10:40.712 "read": true, 00:10:40.712 "write": true, 00:10:40.712 "unmap": true, 00:10:40.712 "flush": true, 00:10:40.712 "reset": true, 00:10:40.712 "nvme_admin": false, 00:10:40.712 "nvme_io": false, 00:10:40.712 "nvme_io_md": false, 00:10:40.712 "write_zeroes": true, 00:10:40.712 "zcopy": true, 00:10:40.712 "get_zone_info": false, 00:10:40.712 "zone_management": false, 00:10:40.712 "zone_append": false, 00:10:40.712 "compare": false, 00:10:40.712 "compare_and_write": false, 00:10:40.712 "abort": true, 00:10:40.712 "seek_hole": false, 00:10:40.712 "seek_data": false, 00:10:40.712 "copy": true, 00:10:40.712 "nvme_iov_md": false 00:10:40.712 }, 00:10:40.712 "memory_domains": [ 00:10:40.712 { 00:10:40.712 "dma_device_id": "system", 00:10:40.712 "dma_device_type": 1 00:10:40.712 }, 00:10:40.712 { 00:10:40.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.712 "dma_device_type": 2 00:10:40.712 } 00:10:40.712 ], 00:10:40.712 "driver_specific": {} 00:10:40.712 } 00:10:40.712 ] 00:10:40.712 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.712 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:40.712 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:40.712 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.713 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.713 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:40.713 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:40.713 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:10:40.713 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.713 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.713 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.713 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.713 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.713 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.713 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.713 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.713 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.713 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.713 "name": "Existed_Raid", 00:10:40.713 "uuid": "aa7ef5d6-a0a7-4916-9fee-98d41ffb6d88", 00:10:40.713 "strip_size_kb": 0, 00:10:40.713 "state": "online", 00:10:40.713 "raid_level": "raid1", 00:10:40.713 "superblock": true, 00:10:40.713 "num_base_bdevs": 3, 00:10:40.713 "num_base_bdevs_discovered": 3, 00:10:40.713 "num_base_bdevs_operational": 3, 00:10:40.713 "base_bdevs_list": [ 00:10:40.713 { 00:10:40.713 "name": "NewBaseBdev", 00:10:40.713 "uuid": "d97ac98f-52a8-46b7-9406-49a6fee62e53", 00:10:40.713 "is_configured": true, 00:10:40.713 "data_offset": 2048, 00:10:40.713 "data_size": 63488 00:10:40.713 }, 00:10:40.713 { 00:10:40.713 "name": "BaseBdev2", 00:10:40.713 "uuid": "126d4914-c75f-40f8-9615-a0e603e40a03", 00:10:40.713 "is_configured": true, 00:10:40.713 "data_offset": 2048, 00:10:40.713 "data_size": 63488 00:10:40.713 }, 00:10:40.713 
{ 00:10:40.713 "name": "BaseBdev3", 00:10:40.713 "uuid": "c38f2ea8-40fe-4ff3-9ec7-73cd2eab65a6", 00:10:40.713 "is_configured": true, 00:10:40.713 "data_offset": 2048, 00:10:40.713 "data_size": 63488 00:10:40.713 } 00:10:40.713 ] 00:10:40.713 }' 00:10:40.713 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.713 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.278 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:41.278 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:41.278 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:41.278 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:41.278 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:41.278 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:41.278 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:41.278 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:41.278 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.278 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.278 [2024-11-20 10:33:44.495949] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:41.278 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.278 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:41.278 "name": "Existed_Raid", 00:10:41.278 
"aliases": [ 00:10:41.278 "aa7ef5d6-a0a7-4916-9fee-98d41ffb6d88" 00:10:41.278 ], 00:10:41.278 "product_name": "Raid Volume", 00:10:41.278 "block_size": 512, 00:10:41.278 "num_blocks": 63488, 00:10:41.278 "uuid": "aa7ef5d6-a0a7-4916-9fee-98d41ffb6d88", 00:10:41.278 "assigned_rate_limits": { 00:10:41.278 "rw_ios_per_sec": 0, 00:10:41.278 "rw_mbytes_per_sec": 0, 00:10:41.278 "r_mbytes_per_sec": 0, 00:10:41.278 "w_mbytes_per_sec": 0 00:10:41.278 }, 00:10:41.278 "claimed": false, 00:10:41.278 "zoned": false, 00:10:41.278 "supported_io_types": { 00:10:41.278 "read": true, 00:10:41.278 "write": true, 00:10:41.278 "unmap": false, 00:10:41.278 "flush": false, 00:10:41.278 "reset": true, 00:10:41.278 "nvme_admin": false, 00:10:41.278 "nvme_io": false, 00:10:41.278 "nvme_io_md": false, 00:10:41.278 "write_zeroes": true, 00:10:41.278 "zcopy": false, 00:10:41.278 "get_zone_info": false, 00:10:41.278 "zone_management": false, 00:10:41.278 "zone_append": false, 00:10:41.278 "compare": false, 00:10:41.278 "compare_and_write": false, 00:10:41.278 "abort": false, 00:10:41.278 "seek_hole": false, 00:10:41.278 "seek_data": false, 00:10:41.278 "copy": false, 00:10:41.278 "nvme_iov_md": false 00:10:41.278 }, 00:10:41.278 "memory_domains": [ 00:10:41.278 { 00:10:41.278 "dma_device_id": "system", 00:10:41.278 "dma_device_type": 1 00:10:41.278 }, 00:10:41.278 { 00:10:41.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.278 "dma_device_type": 2 00:10:41.278 }, 00:10:41.278 { 00:10:41.278 "dma_device_id": "system", 00:10:41.278 "dma_device_type": 1 00:10:41.278 }, 00:10:41.278 { 00:10:41.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.278 "dma_device_type": 2 00:10:41.278 }, 00:10:41.278 { 00:10:41.278 "dma_device_id": "system", 00:10:41.278 "dma_device_type": 1 00:10:41.278 }, 00:10:41.278 { 00:10:41.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.278 "dma_device_type": 2 00:10:41.278 } 00:10:41.278 ], 00:10:41.278 "driver_specific": { 00:10:41.278 "raid": { 00:10:41.278 
"uuid": "aa7ef5d6-a0a7-4916-9fee-98d41ffb6d88", 00:10:41.278 "strip_size_kb": 0, 00:10:41.278 "state": "online", 00:10:41.278 "raid_level": "raid1", 00:10:41.278 "superblock": true, 00:10:41.278 "num_base_bdevs": 3, 00:10:41.278 "num_base_bdevs_discovered": 3, 00:10:41.278 "num_base_bdevs_operational": 3, 00:10:41.278 "base_bdevs_list": [ 00:10:41.278 { 00:10:41.278 "name": "NewBaseBdev", 00:10:41.278 "uuid": "d97ac98f-52a8-46b7-9406-49a6fee62e53", 00:10:41.278 "is_configured": true, 00:10:41.278 "data_offset": 2048, 00:10:41.278 "data_size": 63488 00:10:41.278 }, 00:10:41.278 { 00:10:41.278 "name": "BaseBdev2", 00:10:41.278 "uuid": "126d4914-c75f-40f8-9615-a0e603e40a03", 00:10:41.278 "is_configured": true, 00:10:41.278 "data_offset": 2048, 00:10:41.278 "data_size": 63488 00:10:41.278 }, 00:10:41.278 { 00:10:41.278 "name": "BaseBdev3", 00:10:41.278 "uuid": "c38f2ea8-40fe-4ff3-9ec7-73cd2eab65a6", 00:10:41.279 "is_configured": true, 00:10:41.279 "data_offset": 2048, 00:10:41.279 "data_size": 63488 00:10:41.279 } 00:10:41.279 ] 00:10:41.279 } 00:10:41.279 } 00:10:41.279 }' 00:10:41.279 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:41.279 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:41.279 BaseBdev2 00:10:41.279 BaseBdev3' 00:10:41.279 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.279 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:41.279 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.279 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.279 
10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:41.279 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.279 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.279 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.279 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.279 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.279 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.279 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.279 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:41.279 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.279 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.279 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.279 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.279 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.279 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.279 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.279 10:33:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:41.279 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.279 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.279 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.279 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.279 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.279 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:41.279 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.279 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.279 [2024-11-20 10:33:44.747237] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:41.279 [2024-11-20 10:33:44.747275] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:41.279 [2024-11-20 10:33:44.747386] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:41.279 [2024-11-20 10:33:44.747709] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:41.279 [2024-11-20 10:33:44.747739] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:41.279 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.279 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68206 00:10:41.279 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 68206 ']' 00:10:41.279 10:33:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68206 00:10:41.560 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:41.560 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.560 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68206 00:10:41.560 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.560 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:41.560 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68206' 00:10:41.560 killing process with pid 68206 00:10:41.560 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68206 00:10:41.560 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68206 00:10:41.560 [2024-11-20 10:33:44.789296] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:41.818 [2024-11-20 10:33:45.126168] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:43.192 10:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:43.192 00:10:43.192 real 0m10.724s 00:10:43.192 user 0m17.005s 00:10:43.192 sys 0m1.837s 00:10:43.192 10:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.192 ************************************ 00:10:43.192 END TEST raid_state_function_test_sb 00:10:43.192 ************************************ 00:10:43.192 10:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.192 10:33:46 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:10:43.192 10:33:46 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:43.192 10:33:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.192 10:33:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:43.192 ************************************ 00:10:43.192 START TEST raid_superblock_test 00:10:43.192 ************************************ 00:10:43.192 10:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:10:43.192 10:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:43.192 10:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:43.192 10:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:43.192 10:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:43.192 10:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:43.192 10:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:43.192 10:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:43.192 10:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:43.192 10:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:43.192 10:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:43.192 10:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:43.192 10:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:43.192 10:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:43.192 10:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:10:43.192 10:33:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:43.192 10:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68826 00:10:43.192 10:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:43.192 10:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68826 00:10:43.192 10:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68826 ']' 00:10:43.192 10:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.192 10:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:43.192 10:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.192 10:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:43.192 10:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.192 [2024-11-20 10:33:46.471240] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:10:43.192 [2024-11-20 10:33:46.471405] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68826 ] 00:10:43.192 [2024-11-20 10:33:46.647210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.450 [2024-11-20 10:33:46.772417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.707 [2024-11-20 10:33:46.991117] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.707 [2024-11-20 10:33:46.991189] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:43.966 
10:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.966 malloc1 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.966 [2024-11-20 10:33:47.372642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:43.966 [2024-11-20 10:33:47.372777] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.966 [2024-11-20 10:33:47.372837] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:43.966 [2024-11-20 10:33:47.372872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.966 [2024-11-20 10:33:47.375178] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.966 [2024-11-20 10:33:47.375251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:43.966 pt1 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.966 malloc2 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.966 [2024-11-20 10:33:47.433121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:43.966 [2024-11-20 10:33:47.433180] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.966 [2024-11-20 10:33:47.433203] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:43.966 [2024-11-20 10:33:47.433212] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.966 [2024-11-20 10:33:47.435318] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.966 [2024-11-20 10:33:47.435362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:43.966 
pt2 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.966 10:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.223 malloc3 00:10:44.223 10:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.223 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:44.223 10:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.223 10:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.223 [2024-11-20 10:33:47.503847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:44.223 [2024-11-20 10:33:47.503973] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.223 [2024-11-20 10:33:47.504033] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:44.223 [2024-11-20 10:33:47.504072] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.223 [2024-11-20 10:33:47.506509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.223 [2024-11-20 10:33:47.506586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:44.223 pt3 00:10:44.223 10:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.223 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:44.223 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:44.223 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:44.223 10:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.223 10:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.223 [2024-11-20 10:33:47.515848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:44.223 [2024-11-20 10:33:47.517913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:44.223 [2024-11-20 10:33:47.518029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:44.223 [2024-11-20 10:33:47.518248] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:44.223 [2024-11-20 10:33:47.518304] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:44.223 [2024-11-20 10:33:47.518617] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:44.223 
[2024-11-20 10:33:47.518837] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:44.223 [2024-11-20 10:33:47.518887] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:44.223 [2024-11-20 10:33:47.519113] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:44.223 10:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.223 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:44.223 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:44.223 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:44.223 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:44.223 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:44.223 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:44.223 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.223 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.223 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.223 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.223 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.223 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.223 10:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.223 10:33:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:44.223 10:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.223 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.223 "name": "raid_bdev1", 00:10:44.223 "uuid": "a992f7ea-38a3-4ab4-a456-604b5cea5e0a", 00:10:44.223 "strip_size_kb": 0, 00:10:44.223 "state": "online", 00:10:44.223 "raid_level": "raid1", 00:10:44.223 "superblock": true, 00:10:44.223 "num_base_bdevs": 3, 00:10:44.223 "num_base_bdevs_discovered": 3, 00:10:44.223 "num_base_bdevs_operational": 3, 00:10:44.223 "base_bdevs_list": [ 00:10:44.223 { 00:10:44.223 "name": "pt1", 00:10:44.223 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:44.223 "is_configured": true, 00:10:44.223 "data_offset": 2048, 00:10:44.223 "data_size": 63488 00:10:44.223 }, 00:10:44.223 { 00:10:44.223 "name": "pt2", 00:10:44.223 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:44.223 "is_configured": true, 00:10:44.223 "data_offset": 2048, 00:10:44.223 "data_size": 63488 00:10:44.223 }, 00:10:44.223 { 00:10:44.223 "name": "pt3", 00:10:44.223 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:44.223 "is_configured": true, 00:10:44.223 "data_offset": 2048, 00:10:44.223 "data_size": 63488 00:10:44.223 } 00:10:44.223 ] 00:10:44.223 }' 00:10:44.223 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.223 10:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.480 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:44.481 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:44.481 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:44.481 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:44.481 10:33:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:44.481 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:44.739 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:44.739 10:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:44.739 10:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.739 10:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.739 [2024-11-20 10:33:47.967412] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:44.739 10:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.739 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:44.739 "name": "raid_bdev1", 00:10:44.739 "aliases": [ 00:10:44.739 "a992f7ea-38a3-4ab4-a456-604b5cea5e0a" 00:10:44.739 ], 00:10:44.739 "product_name": "Raid Volume", 00:10:44.739 "block_size": 512, 00:10:44.739 "num_blocks": 63488, 00:10:44.739 "uuid": "a992f7ea-38a3-4ab4-a456-604b5cea5e0a", 00:10:44.739 "assigned_rate_limits": { 00:10:44.739 "rw_ios_per_sec": 0, 00:10:44.739 "rw_mbytes_per_sec": 0, 00:10:44.739 "r_mbytes_per_sec": 0, 00:10:44.739 "w_mbytes_per_sec": 0 00:10:44.739 }, 00:10:44.739 "claimed": false, 00:10:44.739 "zoned": false, 00:10:44.739 "supported_io_types": { 00:10:44.739 "read": true, 00:10:44.739 "write": true, 00:10:44.739 "unmap": false, 00:10:44.739 "flush": false, 00:10:44.739 "reset": true, 00:10:44.739 "nvme_admin": false, 00:10:44.739 "nvme_io": false, 00:10:44.739 "nvme_io_md": false, 00:10:44.739 "write_zeroes": true, 00:10:44.739 "zcopy": false, 00:10:44.739 "get_zone_info": false, 00:10:44.739 "zone_management": false, 00:10:44.739 "zone_append": false, 00:10:44.739 "compare": false, 00:10:44.739 
"compare_and_write": false, 00:10:44.739 "abort": false, 00:10:44.739 "seek_hole": false, 00:10:44.739 "seek_data": false, 00:10:44.739 "copy": false, 00:10:44.739 "nvme_iov_md": false 00:10:44.739 }, 00:10:44.739 "memory_domains": [ 00:10:44.739 { 00:10:44.739 "dma_device_id": "system", 00:10:44.739 "dma_device_type": 1 00:10:44.739 }, 00:10:44.739 { 00:10:44.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.739 "dma_device_type": 2 00:10:44.739 }, 00:10:44.739 { 00:10:44.739 "dma_device_id": "system", 00:10:44.739 "dma_device_type": 1 00:10:44.739 }, 00:10:44.739 { 00:10:44.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.739 "dma_device_type": 2 00:10:44.739 }, 00:10:44.739 { 00:10:44.739 "dma_device_id": "system", 00:10:44.739 "dma_device_type": 1 00:10:44.739 }, 00:10:44.739 { 00:10:44.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.739 "dma_device_type": 2 00:10:44.739 } 00:10:44.739 ], 00:10:44.739 "driver_specific": { 00:10:44.739 "raid": { 00:10:44.739 "uuid": "a992f7ea-38a3-4ab4-a456-604b5cea5e0a", 00:10:44.739 "strip_size_kb": 0, 00:10:44.739 "state": "online", 00:10:44.739 "raid_level": "raid1", 00:10:44.739 "superblock": true, 00:10:44.739 "num_base_bdevs": 3, 00:10:44.739 "num_base_bdevs_discovered": 3, 00:10:44.739 "num_base_bdevs_operational": 3, 00:10:44.739 "base_bdevs_list": [ 00:10:44.739 { 00:10:44.739 "name": "pt1", 00:10:44.739 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:44.739 "is_configured": true, 00:10:44.739 "data_offset": 2048, 00:10:44.739 "data_size": 63488 00:10:44.739 }, 00:10:44.739 { 00:10:44.739 "name": "pt2", 00:10:44.739 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:44.739 "is_configured": true, 00:10:44.739 "data_offset": 2048, 00:10:44.739 "data_size": 63488 00:10:44.739 }, 00:10:44.739 { 00:10:44.739 "name": "pt3", 00:10:44.739 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:44.739 "is_configured": true, 00:10:44.739 "data_offset": 2048, 00:10:44.739 "data_size": 63488 00:10:44.739 } 
00:10:44.739 ] 00:10:44.739 } 00:10:44.739 } 00:10:44.739 }' 00:10:44.739 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:44.739 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:44.739 pt2 00:10:44.739 pt3' 00:10:44.739 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.739 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:44.739 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.739 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:44.739 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.739 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.739 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.739 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.739 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.739 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.739 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.739 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:44.739 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.739 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.739 10:33:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.739 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.739 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.739 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.739 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.739 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:44.740 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.740 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.740 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.740 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.998 [2024-11-20 10:33:48.242939] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a992f7ea-38a3-4ab4-a456-604b5cea5e0a 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a992f7ea-38a3-4ab4-a456-604b5cea5e0a ']' 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.998 [2024-11-20 10:33:48.270552] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:44.998 [2024-11-20 10:33:48.270631] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:44.998 [2024-11-20 10:33:48.270784] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:44.998 [2024-11-20 10:33:48.270905] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:44.998 [2024-11-20 10:33:48.270959] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:44.998 
10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.998 [2024-11-20 10:33:48.386407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:44.998 [2024-11-20 10:33:48.388511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:44.998 [2024-11-20 10:33:48.388568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:10:44.998 [2024-11-20 10:33:48.388623] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:44.998 [2024-11-20 10:33:48.388684] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:44.998 [2024-11-20 10:33:48.388706] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:44.998 [2024-11-20 10:33:48.388725] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:44.998 [2024-11-20 10:33:48.388735] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:44.998 request: 00:10:44.998 { 00:10:44.998 "name": "raid_bdev1", 00:10:44.998 "raid_level": "raid1", 00:10:44.998 "base_bdevs": [ 00:10:44.998 "malloc1", 00:10:44.998 "malloc2", 00:10:44.998 "malloc3" 00:10:44.998 ], 00:10:44.998 "superblock": false, 00:10:44.998 "method": "bdev_raid_create", 00:10:44.998 "req_id": 1 00:10:44.998 } 00:10:44.998 Got JSON-RPC error response 00:10:44.998 response: 00:10:44.998 { 00:10:44.998 "code": -17, 00:10:44.998 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:44.998 } 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.998 10:33:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.998 [2024-11-20 10:33:48.450237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:44.998 [2024-11-20 10:33:48.450326] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.998 [2024-11-20 10:33:48.450354] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:44.998 [2024-11-20 10:33:48.450365] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.998 [2024-11-20 10:33:48.452929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.998 [2024-11-20 10:33:48.452993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:44.998 [2024-11-20 10:33:48.453111] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:44.998 [2024-11-20 10:33:48.453173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:44.998 pt1 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.998 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.999 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.999 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.257 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.257 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.257 "name": "raid_bdev1", 00:10:45.257 "uuid": "a992f7ea-38a3-4ab4-a456-604b5cea5e0a", 00:10:45.257 "strip_size_kb": 0, 00:10:45.257 "state": "configuring", 00:10:45.257 
"raid_level": "raid1", 00:10:45.258 "superblock": true, 00:10:45.258 "num_base_bdevs": 3, 00:10:45.258 "num_base_bdevs_discovered": 1, 00:10:45.258 "num_base_bdevs_operational": 3, 00:10:45.258 "base_bdevs_list": [ 00:10:45.258 { 00:10:45.258 "name": "pt1", 00:10:45.258 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:45.258 "is_configured": true, 00:10:45.258 "data_offset": 2048, 00:10:45.258 "data_size": 63488 00:10:45.258 }, 00:10:45.258 { 00:10:45.258 "name": null, 00:10:45.258 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:45.258 "is_configured": false, 00:10:45.258 "data_offset": 2048, 00:10:45.258 "data_size": 63488 00:10:45.258 }, 00:10:45.258 { 00:10:45.258 "name": null, 00:10:45.258 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:45.258 "is_configured": false, 00:10:45.258 "data_offset": 2048, 00:10:45.258 "data_size": 63488 00:10:45.258 } 00:10:45.258 ] 00:10:45.258 }' 00:10:45.258 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.258 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.517 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:45.517 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:45.517 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.517 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.517 [2024-11-20 10:33:48.925465] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:45.517 [2024-11-20 10:33:48.925531] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.517 [2024-11-20 10:33:48.925556] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:45.517 [2024-11-20 10:33:48.925567] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.517 [2024-11-20 10:33:48.926042] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.517 [2024-11-20 10:33:48.926061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:45.517 [2024-11-20 10:33:48.926153] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:45.517 [2024-11-20 10:33:48.926176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:45.517 pt2 00:10:45.517 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.517 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:45.517 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.517 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.517 [2024-11-20 10:33:48.933451] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:45.517 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.517 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:45.517 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:45.517 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.517 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:45.517 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:45.517 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:45.517 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:45.517 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.517 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.517 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.517 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.517 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.517 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.517 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.517 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.517 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.517 "name": "raid_bdev1", 00:10:45.517 "uuid": "a992f7ea-38a3-4ab4-a456-604b5cea5e0a", 00:10:45.517 "strip_size_kb": 0, 00:10:45.517 "state": "configuring", 00:10:45.517 "raid_level": "raid1", 00:10:45.517 "superblock": true, 00:10:45.517 "num_base_bdevs": 3, 00:10:45.517 "num_base_bdevs_discovered": 1, 00:10:45.517 "num_base_bdevs_operational": 3, 00:10:45.517 "base_bdevs_list": [ 00:10:45.517 { 00:10:45.517 "name": "pt1", 00:10:45.517 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:45.517 "is_configured": true, 00:10:45.517 "data_offset": 2048, 00:10:45.517 "data_size": 63488 00:10:45.517 }, 00:10:45.517 { 00:10:45.517 "name": null, 00:10:45.517 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:45.517 "is_configured": false, 00:10:45.517 "data_offset": 0, 00:10:45.517 "data_size": 63488 00:10:45.517 }, 00:10:45.517 { 00:10:45.517 "name": null, 00:10:45.517 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:45.517 "is_configured": false, 00:10:45.517 "data_offset": 2048, 00:10:45.517 
"data_size": 63488 00:10:45.517 } 00:10:45.517 ] 00:10:45.517 }' 00:10:45.517 10:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.517 10:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.085 10:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:46.085 10:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:46.085 10:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:46.085 10:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.085 10:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.085 [2024-11-20 10:33:49.360723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:46.085 [2024-11-20 10:33:49.360795] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.085 [2024-11-20 10:33:49.360816] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:46.085 [2024-11-20 10:33:49.360828] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.085 [2024-11-20 10:33:49.361311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.085 [2024-11-20 10:33:49.361333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:46.085 [2024-11-20 10:33:49.361440] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:46.085 [2024-11-20 10:33:49.361482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:46.085 pt2 00:10:46.085 10:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.085 10:33:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:46.085 10:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:46.085 10:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:46.085 10:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.085 10:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.085 [2024-11-20 10:33:49.368687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:46.085 [2024-11-20 10:33:49.368790] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.085 [2024-11-20 10:33:49.368818] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:46.085 [2024-11-20 10:33:49.368832] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.085 [2024-11-20 10:33:49.369239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.085 [2024-11-20 10:33:49.369261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:46.085 [2024-11-20 10:33:49.369332] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:46.085 [2024-11-20 10:33:49.369367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:46.085 [2024-11-20 10:33:49.369513] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:46.085 [2024-11-20 10:33:49.369533] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:46.085 [2024-11-20 10:33:49.369785] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:46.085 [2024-11-20 10:33:49.369971] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:10:46.085 [2024-11-20 10:33:49.369982] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:46.085 [2024-11-20 10:33:49.370141] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:46.085 pt3 00:10:46.085 10:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.085 10:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:46.085 10:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:46.085 10:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:46.085 10:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:46.085 10:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:46.085 10:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:46.085 10:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:46.085 10:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:46.085 10:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.085 10:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.085 10:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.085 10:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.085 10:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.085 10:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:46.085 10:33:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.085 10:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.085 10:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.086 10:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.086 "name": "raid_bdev1", 00:10:46.086 "uuid": "a992f7ea-38a3-4ab4-a456-604b5cea5e0a", 00:10:46.086 "strip_size_kb": 0, 00:10:46.086 "state": "online", 00:10:46.086 "raid_level": "raid1", 00:10:46.086 "superblock": true, 00:10:46.086 "num_base_bdevs": 3, 00:10:46.086 "num_base_bdevs_discovered": 3, 00:10:46.086 "num_base_bdevs_operational": 3, 00:10:46.086 "base_bdevs_list": [ 00:10:46.086 { 00:10:46.086 "name": "pt1", 00:10:46.086 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:46.086 "is_configured": true, 00:10:46.086 "data_offset": 2048, 00:10:46.086 "data_size": 63488 00:10:46.086 }, 00:10:46.086 { 00:10:46.086 "name": "pt2", 00:10:46.086 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:46.086 "is_configured": true, 00:10:46.086 "data_offset": 2048, 00:10:46.086 "data_size": 63488 00:10:46.086 }, 00:10:46.086 { 00:10:46.086 "name": "pt3", 00:10:46.086 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:46.086 "is_configured": true, 00:10:46.086 "data_offset": 2048, 00:10:46.086 "data_size": 63488 00:10:46.086 } 00:10:46.086 ] 00:10:46.086 }' 00:10:46.086 10:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.086 10:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.652 10:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:46.652 10:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:46.652 10:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:46.652 10:33:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:46.652 10:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:46.652 10:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:46.652 10:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:46.652 10:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:46.652 10:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.652 10:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.652 [2024-11-20 10:33:49.844276] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:46.652 10:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.652 10:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:46.652 "name": "raid_bdev1", 00:10:46.652 "aliases": [ 00:10:46.652 "a992f7ea-38a3-4ab4-a456-604b5cea5e0a" 00:10:46.652 ], 00:10:46.652 "product_name": "Raid Volume", 00:10:46.652 "block_size": 512, 00:10:46.652 "num_blocks": 63488, 00:10:46.652 "uuid": "a992f7ea-38a3-4ab4-a456-604b5cea5e0a", 00:10:46.652 "assigned_rate_limits": { 00:10:46.652 "rw_ios_per_sec": 0, 00:10:46.652 "rw_mbytes_per_sec": 0, 00:10:46.652 "r_mbytes_per_sec": 0, 00:10:46.652 "w_mbytes_per_sec": 0 00:10:46.652 }, 00:10:46.652 "claimed": false, 00:10:46.652 "zoned": false, 00:10:46.652 "supported_io_types": { 00:10:46.652 "read": true, 00:10:46.652 "write": true, 00:10:46.652 "unmap": false, 00:10:46.652 "flush": false, 00:10:46.652 "reset": true, 00:10:46.652 "nvme_admin": false, 00:10:46.652 "nvme_io": false, 00:10:46.652 "nvme_io_md": false, 00:10:46.652 "write_zeroes": true, 00:10:46.652 "zcopy": false, 00:10:46.652 "get_zone_info": false, 00:10:46.652 
"zone_management": false, 00:10:46.652 "zone_append": false, 00:10:46.652 "compare": false, 00:10:46.652 "compare_and_write": false, 00:10:46.652 "abort": false, 00:10:46.652 "seek_hole": false, 00:10:46.652 "seek_data": false, 00:10:46.652 "copy": false, 00:10:46.652 "nvme_iov_md": false 00:10:46.652 }, 00:10:46.652 "memory_domains": [ 00:10:46.652 { 00:10:46.652 "dma_device_id": "system", 00:10:46.652 "dma_device_type": 1 00:10:46.652 }, 00:10:46.652 { 00:10:46.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.652 "dma_device_type": 2 00:10:46.652 }, 00:10:46.652 { 00:10:46.652 "dma_device_id": "system", 00:10:46.652 "dma_device_type": 1 00:10:46.652 }, 00:10:46.652 { 00:10:46.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.652 "dma_device_type": 2 00:10:46.652 }, 00:10:46.652 { 00:10:46.652 "dma_device_id": "system", 00:10:46.652 "dma_device_type": 1 00:10:46.652 }, 00:10:46.652 { 00:10:46.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.652 "dma_device_type": 2 00:10:46.652 } 00:10:46.652 ], 00:10:46.652 "driver_specific": { 00:10:46.652 "raid": { 00:10:46.652 "uuid": "a992f7ea-38a3-4ab4-a456-604b5cea5e0a", 00:10:46.652 "strip_size_kb": 0, 00:10:46.652 "state": "online", 00:10:46.652 "raid_level": "raid1", 00:10:46.652 "superblock": true, 00:10:46.652 "num_base_bdevs": 3, 00:10:46.652 "num_base_bdevs_discovered": 3, 00:10:46.652 "num_base_bdevs_operational": 3, 00:10:46.652 "base_bdevs_list": [ 00:10:46.652 { 00:10:46.652 "name": "pt1", 00:10:46.652 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:46.653 "is_configured": true, 00:10:46.653 "data_offset": 2048, 00:10:46.653 "data_size": 63488 00:10:46.653 }, 00:10:46.653 { 00:10:46.653 "name": "pt2", 00:10:46.653 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:46.653 "is_configured": true, 00:10:46.653 "data_offset": 2048, 00:10:46.653 "data_size": 63488 00:10:46.653 }, 00:10:46.653 { 00:10:46.653 "name": "pt3", 00:10:46.653 "uuid": "00000000-0000-0000-0000-000000000003", 
00:10:46.653 "is_configured": true, 00:10:46.653 "data_offset": 2048, 00:10:46.653 "data_size": 63488 00:10:46.653 } 00:10:46.653 ] 00:10:46.653 } 00:10:46.653 } 00:10:46.653 }' 00:10:46.653 10:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:46.653 10:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:46.653 pt2 00:10:46.653 pt3' 00:10:46.653 10:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.653 10:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:46.653 10:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.653 10:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:46.653 10:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.653 10:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.653 10:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.653 10:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.653 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.653 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.653 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.653 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:46.653 10:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.653 
10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.653 10:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.653 10:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.653 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.653 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.653 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.653 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:46.653 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.653 10:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.653 10:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.653 10:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.653 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.653 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.653 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:46.911 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:46.911 10:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.911 10:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.911 [2024-11-20 10:33:50.135836] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:10:46.911 10:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.911 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a992f7ea-38a3-4ab4-a456-604b5cea5e0a '!=' a992f7ea-38a3-4ab4-a456-604b5cea5e0a ']' 00:10:46.911 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:46.911 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:46.911 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:46.911 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:46.911 10:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.911 10:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.911 [2024-11-20 10:33:50.167552] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:46.911 10:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.911 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:46.911 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:46.911 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:46.911 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:46.911 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:46.911 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:46.911 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.911 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:10:46.911 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.911 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.911 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.911 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:46.911 10:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.911 10:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.911 10:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.911 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.911 "name": "raid_bdev1", 00:10:46.911 "uuid": "a992f7ea-38a3-4ab4-a456-604b5cea5e0a", 00:10:46.911 "strip_size_kb": 0, 00:10:46.911 "state": "online", 00:10:46.911 "raid_level": "raid1", 00:10:46.911 "superblock": true, 00:10:46.911 "num_base_bdevs": 3, 00:10:46.911 "num_base_bdevs_discovered": 2, 00:10:46.911 "num_base_bdevs_operational": 2, 00:10:46.911 "base_bdevs_list": [ 00:10:46.911 { 00:10:46.911 "name": null, 00:10:46.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.911 "is_configured": false, 00:10:46.911 "data_offset": 0, 00:10:46.911 "data_size": 63488 00:10:46.911 }, 00:10:46.911 { 00:10:46.911 "name": "pt2", 00:10:46.911 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:46.911 "is_configured": true, 00:10:46.911 "data_offset": 2048, 00:10:46.911 "data_size": 63488 00:10:46.911 }, 00:10:46.911 { 00:10:46.911 "name": "pt3", 00:10:46.911 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:46.911 "is_configured": true, 00:10:46.911 "data_offset": 2048, 00:10:46.912 "data_size": 63488 00:10:46.912 } 00:10:46.912 ] 00:10:46.912 }' 00:10:46.912 10:33:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.912 10:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.171 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:47.171 10:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.171 10:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.171 [2024-11-20 10:33:50.638683] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:47.171 [2024-11-20 10:33:50.638712] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:47.171 [2024-11-20 10:33:50.638798] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:47.171 [2024-11-20 10:33:50.638862] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:47.171 [2024-11-20 10:33:50.638878] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:47.171 10:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:47.430 
10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:47.430 [2024-11-20 10:33:50.714488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:47.430 [2024-11-20 10:33:50.714600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.430 [2024-11-20 10:33:50.714637] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:47.430 [2024-11-20 10:33:50.714672] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.430 [2024-11-20 10:33:50.717035] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.430 [2024-11-20 10:33:50.717130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:47.430 [2024-11-20 10:33:50.717239] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:47.430 [2024-11-20 10:33:50.717317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:47.430 pt2 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.430 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.430 "name": "raid_bdev1", 00:10:47.430 "uuid": "a992f7ea-38a3-4ab4-a456-604b5cea5e0a", 00:10:47.430 "strip_size_kb": 0, 00:10:47.430 "state": "configuring", 00:10:47.430 "raid_level": "raid1", 00:10:47.430 "superblock": true, 00:10:47.430 "num_base_bdevs": 3, 00:10:47.430 "num_base_bdevs_discovered": 1, 00:10:47.430 "num_base_bdevs_operational": 2, 00:10:47.430 "base_bdevs_list": [ 00:10:47.430 { 00:10:47.430 "name": null, 00:10:47.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.430 "is_configured": false, 00:10:47.430 "data_offset": 2048, 00:10:47.430 "data_size": 63488 00:10:47.430 }, 00:10:47.430 { 00:10:47.430 "name": "pt2", 00:10:47.430 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:47.430 "is_configured": true, 00:10:47.430 "data_offset": 2048, 00:10:47.430 "data_size": 63488 00:10:47.430 }, 00:10:47.431 { 00:10:47.431 "name": null, 00:10:47.431 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:47.431 "is_configured": false, 00:10:47.431 "data_offset": 2048, 00:10:47.431 "data_size": 63488 00:10:47.431 } 00:10:47.431 ] 00:10:47.431 }' 
00:10:47.431 10:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.431 10:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.688 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:47.688 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:47.688 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:10:47.688 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:47.688 10:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.688 10:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.688 [2024-11-20 10:33:51.157789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:47.688 [2024-11-20 10:33:51.157870] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.688 [2024-11-20 10:33:51.157891] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:47.688 [2024-11-20 10:33:51.157901] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.688 [2024-11-20 10:33:51.158346] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.688 [2024-11-20 10:33:51.158381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:47.688 [2024-11-20 10:33:51.158478] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:47.688 [2024-11-20 10:33:51.158509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:47.688 [2024-11-20 10:33:51.158650] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:47.688 [2024-11-20 10:33:51.158666] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:47.688 [2024-11-20 10:33:51.158939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:47.688 [2024-11-20 10:33:51.159115] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:47.688 [2024-11-20 10:33:51.159124] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:47.688 [2024-11-20 10:33:51.159280] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:47.688 pt3 00:10:47.688 10:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.688 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:47.688 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:47.688 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:47.688 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:47.688 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:47.688 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:47.688 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.688 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.688 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.688 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.946 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.946 10:33:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.946 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:47.946 10:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.946 10:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.946 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.946 "name": "raid_bdev1", 00:10:47.946 "uuid": "a992f7ea-38a3-4ab4-a456-604b5cea5e0a", 00:10:47.946 "strip_size_kb": 0, 00:10:47.946 "state": "online", 00:10:47.946 "raid_level": "raid1", 00:10:47.946 "superblock": true, 00:10:47.946 "num_base_bdevs": 3, 00:10:47.946 "num_base_bdevs_discovered": 2, 00:10:47.946 "num_base_bdevs_operational": 2, 00:10:47.946 "base_bdevs_list": [ 00:10:47.946 { 00:10:47.946 "name": null, 00:10:47.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.946 "is_configured": false, 00:10:47.946 "data_offset": 2048, 00:10:47.946 "data_size": 63488 00:10:47.946 }, 00:10:47.946 { 00:10:47.946 "name": "pt2", 00:10:47.946 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:47.946 "is_configured": true, 00:10:47.946 "data_offset": 2048, 00:10:47.946 "data_size": 63488 00:10:47.946 }, 00:10:47.946 { 00:10:47.946 "name": "pt3", 00:10:47.946 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:47.946 "is_configured": true, 00:10:47.946 "data_offset": 2048, 00:10:47.946 "data_size": 63488 00:10:47.946 } 00:10:47.946 ] 00:10:47.946 }' 00:10:47.946 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.946 10:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.203 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:48.203 10:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.203 
10:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.203 [2024-11-20 10:33:51.605050] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:48.203 [2024-11-20 10:33:51.605160] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:48.203 [2024-11-20 10:33:51.605272] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:48.203 [2024-11-20 10:33:51.605375] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:48.203 [2024-11-20 10:33:51.605440] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:48.203 10:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.203 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.203 10:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.203 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:48.203 10:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.203 10:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.203 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:48.203 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:48.203 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:10:48.203 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:10:48.203 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:10:48.203 10:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.203 10:33:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.203 10:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.203 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:48.203 10:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.203 10:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.203 [2024-11-20 10:33:51.668955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:48.203 [2024-11-20 10:33:51.669063] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.203 [2024-11-20 10:33:51.669089] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:48.203 [2024-11-20 10:33:51.669099] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.203 [2024-11-20 10:33:51.671575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.203 [2024-11-20 10:33:51.671611] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:48.203 [2024-11-20 10:33:51.671697] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:48.203 [2024-11-20 10:33:51.671757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:48.203 [2024-11-20 10:33:51.671892] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:48.203 [2024-11-20 10:33:51.671904] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:48.203 [2024-11-20 10:33:51.671921] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:10:48.203 [2024-11-20 
10:33:51.671977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:48.203 pt1 00:10:48.203 10:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.203 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:10:48.203 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:48.203 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:48.203 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.203 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:48.203 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:48.203 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:48.203 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.203 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.203 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.203 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.203 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.203 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.203 10:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.203 10:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.512 10:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.512 10:33:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.512 "name": "raid_bdev1", 00:10:48.512 "uuid": "a992f7ea-38a3-4ab4-a456-604b5cea5e0a", 00:10:48.512 "strip_size_kb": 0, 00:10:48.512 "state": "configuring", 00:10:48.512 "raid_level": "raid1", 00:10:48.512 "superblock": true, 00:10:48.512 "num_base_bdevs": 3, 00:10:48.512 "num_base_bdevs_discovered": 1, 00:10:48.512 "num_base_bdevs_operational": 2, 00:10:48.512 "base_bdevs_list": [ 00:10:48.512 { 00:10:48.512 "name": null, 00:10:48.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.512 "is_configured": false, 00:10:48.512 "data_offset": 2048, 00:10:48.512 "data_size": 63488 00:10:48.512 }, 00:10:48.512 { 00:10:48.512 "name": "pt2", 00:10:48.512 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:48.512 "is_configured": true, 00:10:48.512 "data_offset": 2048, 00:10:48.512 "data_size": 63488 00:10:48.512 }, 00:10:48.512 { 00:10:48.512 "name": null, 00:10:48.512 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:48.512 "is_configured": false, 00:10:48.512 "data_offset": 2048, 00:10:48.512 "data_size": 63488 00:10:48.512 } 00:10:48.512 ] 00:10:48.512 }' 00:10:48.512 10:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.512 10:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.770 10:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:48.770 10:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:48.770 10:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.770 10:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.770 10:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.770 10:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 
-- # [[ false == \f\a\l\s\e ]] 00:10:48.770 10:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:48.770 10:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.770 10:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.770 [2024-11-20 10:33:52.116206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:48.770 [2024-11-20 10:33:52.116320] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.770 [2024-11-20 10:33:52.116372] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:48.770 [2024-11-20 10:33:52.116409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.770 [2024-11-20 10:33:52.116911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.770 [2024-11-20 10:33:52.116971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:48.770 [2024-11-20 10:33:52.117092] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:48.770 [2024-11-20 10:33:52.117172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:48.770 [2024-11-20 10:33:52.117387] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:10:48.770 [2024-11-20 10:33:52.117430] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:48.770 [2024-11-20 10:33:52.117724] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:48.770 [2024-11-20 10:33:52.117935] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:10:48.770 [2024-11-20 10:33:52.117984] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name raid_bdev1, raid_bdev 0x617000008900 00:10:48.770 [2024-11-20 10:33:52.118173] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:48.770 pt3 00:10:48.770 10:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.770 10:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:48.770 10:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:48.770 10:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:48.770 10:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:48.770 10:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:48.770 10:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:48.770 10:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.770 10:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.770 10:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.770 10:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.770 10:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.770 10:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.770 10:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.770 10:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.770 10:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.770 10:33:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.770 "name": "raid_bdev1", 00:10:48.770 "uuid": "a992f7ea-38a3-4ab4-a456-604b5cea5e0a", 00:10:48.770 "strip_size_kb": 0, 00:10:48.770 "state": "online", 00:10:48.770 "raid_level": "raid1", 00:10:48.770 "superblock": true, 00:10:48.770 "num_base_bdevs": 3, 00:10:48.770 "num_base_bdevs_discovered": 2, 00:10:48.770 "num_base_bdevs_operational": 2, 00:10:48.770 "base_bdevs_list": [ 00:10:48.770 { 00:10:48.770 "name": null, 00:10:48.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.770 "is_configured": false, 00:10:48.770 "data_offset": 2048, 00:10:48.770 "data_size": 63488 00:10:48.770 }, 00:10:48.770 { 00:10:48.770 "name": "pt2", 00:10:48.770 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:48.770 "is_configured": true, 00:10:48.770 "data_offset": 2048, 00:10:48.770 "data_size": 63488 00:10:48.770 }, 00:10:48.770 { 00:10:48.770 "name": "pt3", 00:10:48.770 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:48.770 "is_configured": true, 00:10:48.770 "data_offset": 2048, 00:10:48.770 "data_size": 63488 00:10:48.770 } 00:10:48.770 ] 00:10:48.770 }' 00:10:48.770 10:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.770 10:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.337 10:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:49.337 10:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:49.337 10:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.337 10:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.337 10:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.337 10:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:49.337 
10:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:49.337 10:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:49.337 10:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.337 10:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.337 [2024-11-20 10:33:52.647692] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:49.337 10:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.337 10:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' a992f7ea-38a3-4ab4-a456-604b5cea5e0a '!=' a992f7ea-38a3-4ab4-a456-604b5cea5e0a ']' 00:10:49.337 10:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68826 00:10:49.337 10:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68826 ']' 00:10:49.337 10:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68826 00:10:49.337 10:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:49.337 10:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:49.337 10:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68826 00:10:49.337 killing process with pid 68826 00:10:49.337 10:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:49.337 10:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:49.337 10:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68826' 00:10:49.337 10:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 68826 00:10:49.337 10:33:52 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68826 00:10:49.337 [2024-11-20 10:33:52.733455] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:49.337 [2024-11-20 10:33:52.733556] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:49.337 [2024-11-20 10:33:52.733640] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:49.337 [2024-11-20 10:33:52.733659] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:10:49.595 [2024-11-20 10:33:53.052560] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:50.968 10:33:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:50.968 ************************************ 00:10:50.968 END TEST raid_superblock_test 00:10:50.968 ************************************ 00:10:50.968 00:10:50.968 real 0m7.843s 00:10:50.968 user 0m12.327s 00:10:50.968 sys 0m1.338s 00:10:50.968 10:33:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.968 10:33:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.968 10:33:54 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:10:50.968 10:33:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:50.968 10:33:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.968 10:33:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:50.968 ************************************ 00:10:50.968 START TEST raid_read_error_test 00:10:50.968 ************************************ 00:10:50.968 10:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:10:50.968 10:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 
00:10:50.968 10:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:50.968 10:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:50.968 10:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:50.968 10:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:50.968 10:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:50.968 10:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:50.968 10:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:50.968 10:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:50.968 10:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:50.968 10:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:50.968 10:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:50.968 10:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:50.968 10:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:50.968 10:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:50.968 10:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:50.968 10:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:50.968 10:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:50.968 10:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:50.969 10:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:50.969 10:33:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:50.969 10:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:50.969 10:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:50.969 10:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:50.969 10:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.cbQGXq1uu4 00:10:50.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.969 10:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69272 00:10:50.969 10:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69272 00:10:50.969 10:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69272 ']' 00:10:50.969 10:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.969 10:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.969 10:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.969 10:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.969 10:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.969 10:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:50.969 [2024-11-20 10:33:54.395357] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:10:50.969 [2024-11-20 10:33:54.395479] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69272 ] 00:10:51.227 [2024-11-20 10:33:54.569388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.227 [2024-11-20 10:33:54.689303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.484 [2024-11-20 10:33:54.893116] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.484 [2024-11-20 10:33:54.893174] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:52.050 10:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:52.050 10:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:52.050 10:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:52.050 10:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:52.050 10:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.050 10:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.050 BaseBdev1_malloc 00:10:52.050 10:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.050 10:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:52.050 10:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.050 10:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.050 true 00:10:52.050 10:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:52.050 10:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:52.050 10:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.050 10:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.050 [2024-11-20 10:33:55.359489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:52.050 [2024-11-20 10:33:55.359556] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.050 [2024-11-20 10:33:55.359581] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:52.050 [2024-11-20 10:33:55.359593] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.050 [2024-11-20 10:33:55.362036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.050 [2024-11-20 10:33:55.362081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:52.050 BaseBdev1 00:10:52.050 10:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.050 10:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:52.050 10:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:52.050 10:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.050 10:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.050 BaseBdev2_malloc 00:10:52.050 10:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.050 10:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:52.050 10:33:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.050 10:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.050 true 00:10:52.050 10:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.050 10:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:52.050 10:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.050 10:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.050 [2024-11-20 10:33:55.430107] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:52.050 [2024-11-20 10:33:55.430175] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.050 [2024-11-20 10:33:55.430195] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:52.050 [2024-11-20 10:33:55.430206] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.050 [2024-11-20 10:33:55.432570] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.051 [2024-11-20 10:33:55.432615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:52.051 BaseBdev2 00:10:52.051 10:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.051 10:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:52.051 10:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:52.051 10:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.051 10:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.051 BaseBdev3_malloc 00:10:52.051 10:33:55 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.051 10:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:52.051 10:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.051 10:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.051 true 00:10:52.051 10:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.051 10:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:52.051 10:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.051 10:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.051 [2024-11-20 10:33:55.509365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:52.051 [2024-11-20 10:33:55.509437] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.051 [2024-11-20 10:33:55.509474] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:52.051 [2024-11-20 10:33:55.509485] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.051 [2024-11-20 10:33:55.511743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.051 [2024-11-20 10:33:55.511782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:52.051 BaseBdev3 00:10:52.051 10:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.051 10:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:52.051 10:33:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.051 10:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.051 [2024-11-20 10:33:55.521393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:52.051 [2024-11-20 10:33:55.523305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:52.051 [2024-11-20 10:33:55.523434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:52.051 [2024-11-20 10:33:55.523676] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:52.051 [2024-11-20 10:33:55.523723] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:52.051 [2024-11-20 10:33:55.524008] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:52.051 [2024-11-20 10:33:55.524221] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:52.051 [2024-11-20 10:33:55.524267] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:52.051 [2024-11-20 10:33:55.524469] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:52.051 10:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.051 10:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:52.308 10:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:52.308 10:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:52.308 10:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:52.308 10:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:52.308 10:33:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:52.308 10:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.308 10:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.308 10:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.308 10:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.308 10:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:52.308 10:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.308 10:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.308 10:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.308 10:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.308 10:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.308 "name": "raid_bdev1", 00:10:52.308 "uuid": "dd049f1b-cfe3-4ced-a870-3eef85a77462", 00:10:52.308 "strip_size_kb": 0, 00:10:52.308 "state": "online", 00:10:52.308 "raid_level": "raid1", 00:10:52.308 "superblock": true, 00:10:52.308 "num_base_bdevs": 3, 00:10:52.308 "num_base_bdevs_discovered": 3, 00:10:52.308 "num_base_bdevs_operational": 3, 00:10:52.308 "base_bdevs_list": [ 00:10:52.308 { 00:10:52.308 "name": "BaseBdev1", 00:10:52.308 "uuid": "c24655e2-b008-579e-9955-b956c9d54d64", 00:10:52.308 "is_configured": true, 00:10:52.308 "data_offset": 2048, 00:10:52.308 "data_size": 63488 00:10:52.308 }, 00:10:52.308 { 00:10:52.308 "name": "BaseBdev2", 00:10:52.308 "uuid": "11c69d9d-9b8d-5cf0-b3d9-7c75a44bac0c", 00:10:52.308 "is_configured": true, 00:10:52.308 "data_offset": 2048, 00:10:52.308 "data_size": 63488 
00:10:52.308 }, 00:10:52.308 { 00:10:52.308 "name": "BaseBdev3", 00:10:52.308 "uuid": "03c161eb-da60-5c05-a435-5aac43c4acc6", 00:10:52.308 "is_configured": true, 00:10:52.308 "data_offset": 2048, 00:10:52.308 "data_size": 63488 00:10:52.308 } 00:10:52.308 ] 00:10:52.308 }' 00:10:52.309 10:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.309 10:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.566 10:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:52.566 10:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:52.566 [2024-11-20 10:33:56.034126] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:53.532 10:33:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:53.532 10:33:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.532 10:33:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.532 10:33:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.532 10:33:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:53.532 10:33:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:53.532 10:33:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:53.532 10:33:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:53.532 10:33:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:53.532 10:33:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:53.532 
10:33:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:53.532 10:33:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:53.532 10:33:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:53.532 10:33:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:53.532 10:33:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.532 10:33:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.532 10:33:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.532 10:33:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.532 10:33:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.532 10:33:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.532 10:33:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.532 10:33:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.532 10:33:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.532 10:33:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.532 "name": "raid_bdev1", 00:10:53.532 "uuid": "dd049f1b-cfe3-4ced-a870-3eef85a77462", 00:10:53.532 "strip_size_kb": 0, 00:10:53.532 "state": "online", 00:10:53.532 "raid_level": "raid1", 00:10:53.532 "superblock": true, 00:10:53.532 "num_base_bdevs": 3, 00:10:53.532 "num_base_bdevs_discovered": 3, 00:10:53.532 "num_base_bdevs_operational": 3, 00:10:53.532 "base_bdevs_list": [ 00:10:53.532 { 00:10:53.532 "name": "BaseBdev1", 00:10:53.532 "uuid": "c24655e2-b008-579e-9955-b956c9d54d64", 
00:10:53.532 "is_configured": true, 00:10:53.532 "data_offset": 2048, 00:10:53.532 "data_size": 63488 00:10:53.532 }, 00:10:53.532 { 00:10:53.532 "name": "BaseBdev2", 00:10:53.532 "uuid": "11c69d9d-9b8d-5cf0-b3d9-7c75a44bac0c", 00:10:53.532 "is_configured": true, 00:10:53.532 "data_offset": 2048, 00:10:53.532 "data_size": 63488 00:10:53.532 }, 00:10:53.532 { 00:10:53.532 "name": "BaseBdev3", 00:10:53.532 "uuid": "03c161eb-da60-5c05-a435-5aac43c4acc6", 00:10:53.532 "is_configured": true, 00:10:53.532 "data_offset": 2048, 00:10:53.532 "data_size": 63488 00:10:53.532 } 00:10:53.532 ] 00:10:53.532 }' 00:10:53.532 10:33:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.532 10:33:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.097 10:33:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:54.097 10:33:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.097 10:33:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.097 [2024-11-20 10:33:57.425607] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:54.097 [2024-11-20 10:33:57.425637] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:54.097 [2024-11-20 10:33:57.428596] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:54.097 [2024-11-20 10:33:57.428711] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:54.097 [2024-11-20 10:33:57.428834] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:54.097 [2024-11-20 10:33:57.428846] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:54.097 { 00:10:54.097 "results": [ 00:10:54.097 { 00:10:54.097 "job": "raid_bdev1", 
00:10:54.097 "core_mask": "0x1", 00:10:54.097 "workload": "randrw", 00:10:54.097 "percentage": 50, 00:10:54.097 "status": "finished", 00:10:54.097 "queue_depth": 1, 00:10:54.097 "io_size": 131072, 00:10:54.097 "runtime": 1.392145, 00:10:54.097 "iops": 12828.40508711377, 00:10:54.097 "mibps": 1603.5506358892212, 00:10:54.097 "io_failed": 0, 00:10:54.097 "io_timeout": 0, 00:10:54.097 "avg_latency_us": 75.16212583236322, 00:10:54.097 "min_latency_us": 23.475982532751093, 00:10:54.098 "max_latency_us": 1774.3371179039302 00:10:54.098 } 00:10:54.098 ], 00:10:54.098 "core_count": 1 00:10:54.098 } 00:10:54.098 10:33:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.098 10:33:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69272 00:10:54.098 10:33:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69272 ']' 00:10:54.098 10:33:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69272 00:10:54.098 10:33:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:54.098 10:33:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:54.098 10:33:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69272 00:10:54.098 killing process with pid 69272 00:10:54.098 10:33:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:54.098 10:33:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:54.098 10:33:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69272' 00:10:54.098 10:33:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69272 00:10:54.098 10:33:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69272 00:10:54.098 [2024-11-20 10:33:57.472318] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:54.355 [2024-11-20 10:33:57.714929] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:55.731 10:33:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.cbQGXq1uu4 00:10:55.731 10:33:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:55.731 10:33:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:55.731 ************************************ 00:10:55.731 END TEST raid_read_error_test 00:10:55.731 ************************************ 00:10:55.731 10:33:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:55.731 10:33:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:55.731 10:33:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:55.731 10:33:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:55.731 10:33:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:55.731 00:10:55.731 real 0m4.627s 00:10:55.731 user 0m5.558s 00:10:55.731 sys 0m0.556s 00:10:55.731 10:33:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.731 10:33:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.731 10:33:58 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:10:55.731 10:33:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:55.731 10:33:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.731 10:33:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:55.731 ************************************ 00:10:55.731 START TEST raid_write_error_test 00:10:55.731 ************************************ 00:10:55.731 10:33:58 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:10:55.731 10:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:55.731 10:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:55.731 10:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:55.731 10:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:55.731 10:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:55.731 10:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:55.731 10:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:55.731 10:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:55.731 10:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:55.731 10:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:55.731 10:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:55.731 10:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:55.731 10:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:55.731 10:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:55.731 10:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:55.731 10:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:55.731 10:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:55.731 10:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:55.731 10:33:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:55.731 10:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:55.731 10:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:55.731 10:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:55.731 10:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:55.731 10:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:55.731 10:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.CRGN7kjSQU 00:10:55.731 10:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69413 00:10:55.731 10:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69413 00:10:55.731 10:33:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69413 ']' 00:10:55.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.731 10:33:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.731 10:33:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:55.731 10:33:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:55.731 10:33:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:55.731 10:33:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.731 10:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:55.731 [2024-11-20 10:33:59.069183] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:10:55.731 [2024-11-20 10:33:59.069309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69413 ] 00:10:55.990 [2024-11-20 10:33:59.226361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.990 [2024-11-20 10:33:59.345152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.250 [2024-11-20 10:33:59.550014] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:56.250 [2024-11-20 10:33:59.550065] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:56.526 10:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:56.526 10:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:56.526 10:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:56.526 10:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:56.526 10:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.526 10:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.526 BaseBdev1_malloc 00:10:56.526 10:33:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.526 10:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:56.526 10:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.526 10:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.526 true 00:10:56.526 10:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.526 10:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:56.526 10:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.526 10:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.526 [2024-11-20 10:33:59.989970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:56.526 [2024-11-20 10:33:59.990045] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.526 [2024-11-20 10:33:59.990078] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:56.526 [2024-11-20 10:33:59.990096] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.526 [2024-11-20 10:33:59.992422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.526 [2024-11-20 10:33:59.992471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:56.526 BaseBdev1 00:10:56.526 10:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.526 10:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:56.526 10:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:10:56.526 10:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.526 10:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.784 BaseBdev2_malloc 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.784 true 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.784 [2024-11-20 10:34:00.050108] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:56.784 [2024-11-20 10:34:00.050177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.784 [2024-11-20 10:34:00.050206] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:56.784 [2024-11-20 10:34:00.050224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.784 [2024-11-20 10:34:00.052437] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.784 [2024-11-20 10:34:00.052540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:56.784 BaseBdev2 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.784 BaseBdev3_malloc 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.784 true 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.784 [2024-11-20 10:34:00.126271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:56.784 [2024-11-20 10:34:00.126347] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.784 [2024-11-20 10:34:00.126403] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:56.784 [2024-11-20 10:34:00.126422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.784 [2024-11-20 10:34:00.128984] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.784 [2024-11-20 10:34:00.129035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:56.784 BaseBdev3 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.784 [2024-11-20 10:34:00.138331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:56.784 [2024-11-20 10:34:00.140325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:56.784 [2024-11-20 10:34:00.140518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:56.784 [2024-11-20 10:34:00.140774] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:56.784 [2024-11-20 10:34:00.140791] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:56.784 [2024-11-20 10:34:00.141114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:56.784 [2024-11-20 10:34:00.141310] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:56.784 [2024-11-20 10:34:00.141324] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:56.784 [2024-11-20 10:34:00.141540] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.784 10:34:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.784 10:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.784 "name": "raid_bdev1", 00:10:56.784 "uuid": "2d80475f-b06d-4f5d-b794-62b6aa2b6a2e", 00:10:56.784 "strip_size_kb": 0, 00:10:56.784 "state": "online", 00:10:56.784 "raid_level": "raid1", 00:10:56.784 "superblock": true, 00:10:56.784 
"num_base_bdevs": 3, 00:10:56.784 "num_base_bdevs_discovered": 3, 00:10:56.784 "num_base_bdevs_operational": 3, 00:10:56.784 "base_bdevs_list": [ 00:10:56.784 { 00:10:56.784 "name": "BaseBdev1", 00:10:56.784 "uuid": "14c42fd1-bbd6-52ee-a19e-8b6365e7b737", 00:10:56.784 "is_configured": true, 00:10:56.784 "data_offset": 2048, 00:10:56.784 "data_size": 63488 00:10:56.784 }, 00:10:56.784 { 00:10:56.784 "name": "BaseBdev2", 00:10:56.784 "uuid": "6abb7b16-afb0-5882-b102-4c53cf91e132", 00:10:56.784 "is_configured": true, 00:10:56.784 "data_offset": 2048, 00:10:56.784 "data_size": 63488 00:10:56.784 }, 00:10:56.784 { 00:10:56.785 "name": "BaseBdev3", 00:10:56.785 "uuid": "086b8b94-d9be-5583-947a-4e24f7e5da81", 00:10:56.785 "is_configured": true, 00:10:56.785 "data_offset": 2048, 00:10:56.785 "data_size": 63488 00:10:56.785 } 00:10:56.785 ] 00:10:56.785 }' 00:10:56.785 10:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.785 10:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.351 10:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:57.351 10:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:57.351 [2024-11-20 10:34:00.654689] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:58.284 10:34:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:58.284 10:34:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.284 10:34:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.284 [2024-11-20 10:34:01.569974] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:58.284 [2024-11-20 10:34:01.570040] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:58.284 [2024-11-20 10:34:01.570310] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 00:10:58.284 10:34:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.284 10:34:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:58.284 10:34:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:58.284 10:34:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:58.284 10:34:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:58.284 10:34:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:58.284 10:34:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:58.284 10:34:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.284 10:34:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:58.284 10:34:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:58.284 10:34:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:58.284 10:34:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.284 10:34:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.284 10:34:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.284 10:34:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.284 10:34:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.284 10:34:01 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.284 10:34:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.284 10:34:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.284 10:34:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.284 10:34:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.284 "name": "raid_bdev1", 00:10:58.284 "uuid": "2d80475f-b06d-4f5d-b794-62b6aa2b6a2e", 00:10:58.284 "strip_size_kb": 0, 00:10:58.284 "state": "online", 00:10:58.284 "raid_level": "raid1", 00:10:58.284 "superblock": true, 00:10:58.284 "num_base_bdevs": 3, 00:10:58.284 "num_base_bdevs_discovered": 2, 00:10:58.284 "num_base_bdevs_operational": 2, 00:10:58.284 "base_bdevs_list": [ 00:10:58.284 { 00:10:58.284 "name": null, 00:10:58.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.284 "is_configured": false, 00:10:58.284 "data_offset": 0, 00:10:58.284 "data_size": 63488 00:10:58.284 }, 00:10:58.284 { 00:10:58.284 "name": "BaseBdev2", 00:10:58.284 "uuid": "6abb7b16-afb0-5882-b102-4c53cf91e132", 00:10:58.284 "is_configured": true, 00:10:58.284 "data_offset": 2048, 00:10:58.284 "data_size": 63488 00:10:58.284 }, 00:10:58.284 { 00:10:58.284 "name": "BaseBdev3", 00:10:58.284 "uuid": "086b8b94-d9be-5583-947a-4e24f7e5da81", 00:10:58.284 "is_configured": true, 00:10:58.284 "data_offset": 2048, 00:10:58.284 "data_size": 63488 00:10:58.284 } 00:10:58.284 ] 00:10:58.284 }' 00:10:58.284 10:34:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.284 10:34:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.542 10:34:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:58.542 10:34:02 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.542 10:34:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.543 [2024-11-20 10:34:02.012515] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:58.543 [2024-11-20 10:34:02.012620] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:58.543 [2024-11-20 10:34:02.015330] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:58.543 [2024-11-20 10:34:02.015411] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:58.543 [2024-11-20 10:34:02.015499] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:58.543 [2024-11-20 10:34:02.015513] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:58.543 { 00:10:58.543 "results": [ 00:10:58.543 { 00:10:58.543 "job": "raid_bdev1", 00:10:58.543 "core_mask": "0x1", 00:10:58.543 "workload": "randrw", 00:10:58.543 "percentage": 50, 00:10:58.543 "status": "finished", 00:10:58.543 "queue_depth": 1, 00:10:58.543 "io_size": 131072, 00:10:58.543 "runtime": 1.358719, 00:10:58.543 "iops": 13906.481031029964, 00:10:58.543 "mibps": 1738.3101288787454, 00:10:58.543 "io_failed": 0, 00:10:58.543 "io_timeout": 0, 00:10:58.543 "avg_latency_us": 68.93475157472172, 00:10:58.543 "min_latency_us": 24.258515283842794, 00:10:58.543 "max_latency_us": 1395.1441048034935 00:10:58.543 } 00:10:58.543 ], 00:10:58.543 "core_count": 1 00:10:58.543 } 00:10:58.543 10:34:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.543 10:34:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69413 00:10:58.543 10:34:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69413 ']' 00:10:58.543 10:34:02 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@958 -- # kill -0 69413 00:10:58.543 10:34:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:58.801 10:34:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:58.801 10:34:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69413 00:10:58.801 killing process with pid 69413 00:10:58.801 10:34:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:58.801 10:34:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:58.801 10:34:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69413' 00:10:58.801 10:34:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69413 00:10:58.801 10:34:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69413 00:10:58.801 [2024-11-20 10:34:02.044531] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:59.059 [2024-11-20 10:34:02.284238] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:59.993 10:34:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.CRGN7kjSQU 00:10:59.993 10:34:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:59.993 10:34:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:59.993 10:34:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:59.993 10:34:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:59.993 10:34:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:59.993 10:34:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:59.993 10:34:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 
-- # [[ 0.00 = \0\.\0\0 ]] 00:10:59.993 00:10:59.993 real 0m4.484s 00:10:59.993 user 0m5.315s 00:10:59.993 sys 0m0.506s 00:10:59.993 10:34:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:59.993 ************************************ 00:10:59.993 END TEST raid_write_error_test 00:10:59.993 ************************************ 00:10:59.993 10:34:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.252 10:34:03 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:11:00.252 10:34:03 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:00.252 10:34:03 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:11:00.252 10:34:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:00.252 10:34:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.252 10:34:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:00.252 ************************************ 00:11:00.252 START TEST raid_state_function_test 00:11:00.252 ************************************ 00:11:00.252 10:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:11:00.252 10:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:00.252 10:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:00.252 10:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:00.252 10:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:00.252 10:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:00.252 10:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:00.252 10:34:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:00.252 10:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:00.252 10:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:00.252 10:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:00.252 10:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:00.252 10:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:00.252 10:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:00.252 10:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:00.252 10:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:00.252 10:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:00.252 10:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:00.252 10:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:00.252 10:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:00.252 10:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:00.252 10:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:00.252 10:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:00.252 10:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:00.252 10:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:00.252 10:34:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:00.252 10:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:00.252 10:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:00.252 10:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:00.252 10:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:00.252 Process raid pid: 69562 00:11:00.252 10:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69562 00:11:00.252 10:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:00.252 10:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69562' 00:11:00.252 10:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69562 00:11:00.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.252 10:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69562 ']' 00:11:00.252 10:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.252 10:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:00.252 10:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.252 10:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:00.252 10:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.252 [2024-11-20 10:34:03.616257] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:11:00.252 [2024-11-20 10:34:03.616473] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:00.510 [2024-11-20 10:34:03.783163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.510 [2024-11-20 10:34:03.898599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.768 [2024-11-20 10:34:04.103088] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:00.768 [2024-11-20 10:34:04.103142] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:01.026 10:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:01.026 10:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:01.027 10:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:01.027 10:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.027 10:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.027 [2024-11-20 10:34:04.449254] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:01.027 [2024-11-20 10:34:04.449324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:01.027 [2024-11-20 10:34:04.449342] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:01.027 [2024-11-20 10:34:04.449370] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:01.027 [2024-11-20 10:34:04.449382] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:01.027 [2024-11-20 10:34:04.449398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:01.027 [2024-11-20 10:34:04.449410] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:01.027 [2024-11-20 10:34:04.449426] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:01.027 10:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.027 10:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:01.027 10:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.027 10:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.027 10:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:01.027 10:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.027 10:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.027 10:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.027 10:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.027 10:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.027 10:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.027 10:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.027 10:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.027 10:34:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.027 10:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.027 10:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.027 10:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.027 "name": "Existed_Raid", 00:11:01.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.027 "strip_size_kb": 64, 00:11:01.027 "state": "configuring", 00:11:01.027 "raid_level": "raid0", 00:11:01.027 "superblock": false, 00:11:01.027 "num_base_bdevs": 4, 00:11:01.027 "num_base_bdevs_discovered": 0, 00:11:01.027 "num_base_bdevs_operational": 4, 00:11:01.027 "base_bdevs_list": [ 00:11:01.027 { 00:11:01.027 "name": "BaseBdev1", 00:11:01.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.027 "is_configured": false, 00:11:01.027 "data_offset": 0, 00:11:01.027 "data_size": 0 00:11:01.027 }, 00:11:01.027 { 00:11:01.027 "name": "BaseBdev2", 00:11:01.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.027 "is_configured": false, 00:11:01.027 "data_offset": 0, 00:11:01.027 "data_size": 0 00:11:01.027 }, 00:11:01.027 { 00:11:01.027 "name": "BaseBdev3", 00:11:01.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.027 "is_configured": false, 00:11:01.027 "data_offset": 0, 00:11:01.027 "data_size": 0 00:11:01.027 }, 00:11:01.027 { 00:11:01.027 "name": "BaseBdev4", 00:11:01.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.027 "is_configured": false, 00:11:01.027 "data_offset": 0, 00:11:01.027 "data_size": 0 00:11:01.027 } 00:11:01.027 ] 00:11:01.027 }' 00:11:01.027 10:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.027 10:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.596 10:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:01.596 10:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.596 10:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.596 [2024-11-20 10:34:04.860631] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:01.597 [2024-11-20 10:34:04.860676] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.597 [2024-11-20 10:34:04.868610] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:01.597 [2024-11-20 10:34:04.868663] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:01.597 [2024-11-20 10:34:04.868678] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:01.597 [2024-11-20 10:34:04.868696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:01.597 [2024-11-20 10:34:04.868707] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:01.597 [2024-11-20 10:34:04.868722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:01.597 [2024-11-20 10:34:04.868734] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:01.597 [2024-11-20 10:34:04.868750] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.597 [2024-11-20 10:34:04.914860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:01.597 BaseBdev1 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.597 [ 00:11:01.597 { 00:11:01.597 "name": "BaseBdev1", 00:11:01.597 "aliases": [ 00:11:01.597 "cc658763-f05b-4a3d-8106-cc4d7751c454" 00:11:01.597 ], 00:11:01.597 "product_name": "Malloc disk", 00:11:01.597 "block_size": 512, 00:11:01.597 "num_blocks": 65536, 00:11:01.597 "uuid": "cc658763-f05b-4a3d-8106-cc4d7751c454", 00:11:01.597 "assigned_rate_limits": { 00:11:01.597 "rw_ios_per_sec": 0, 00:11:01.597 "rw_mbytes_per_sec": 0, 00:11:01.597 "r_mbytes_per_sec": 0, 00:11:01.597 "w_mbytes_per_sec": 0 00:11:01.597 }, 00:11:01.597 "claimed": true, 00:11:01.597 "claim_type": "exclusive_write", 00:11:01.597 "zoned": false, 00:11:01.597 "supported_io_types": { 00:11:01.597 "read": true, 00:11:01.597 "write": true, 00:11:01.597 "unmap": true, 00:11:01.597 "flush": true, 00:11:01.597 "reset": true, 00:11:01.597 "nvme_admin": false, 00:11:01.597 "nvme_io": false, 00:11:01.597 "nvme_io_md": false, 00:11:01.597 "write_zeroes": true, 00:11:01.597 "zcopy": true, 00:11:01.597 "get_zone_info": false, 00:11:01.597 "zone_management": false, 00:11:01.597 "zone_append": false, 00:11:01.597 "compare": false, 00:11:01.597 "compare_and_write": false, 00:11:01.597 "abort": true, 00:11:01.597 "seek_hole": false, 00:11:01.597 "seek_data": false, 00:11:01.597 "copy": true, 00:11:01.597 "nvme_iov_md": false 00:11:01.597 }, 00:11:01.597 "memory_domains": [ 00:11:01.597 { 00:11:01.597 "dma_device_id": "system", 00:11:01.597 "dma_device_type": 1 00:11:01.597 }, 00:11:01.597 { 00:11:01.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.597 "dma_device_type": 2 00:11:01.597 } 00:11:01.597 ], 00:11:01.597 "driver_specific": {} 00:11:01.597 } 00:11:01.597 ] 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.597 10:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.597 "name": "Existed_Raid", 
00:11:01.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.597 "strip_size_kb": 64, 00:11:01.597 "state": "configuring", 00:11:01.597 "raid_level": "raid0", 00:11:01.597 "superblock": false, 00:11:01.597 "num_base_bdevs": 4, 00:11:01.597 "num_base_bdevs_discovered": 1, 00:11:01.597 "num_base_bdevs_operational": 4, 00:11:01.597 "base_bdevs_list": [ 00:11:01.597 { 00:11:01.597 "name": "BaseBdev1", 00:11:01.597 "uuid": "cc658763-f05b-4a3d-8106-cc4d7751c454", 00:11:01.597 "is_configured": true, 00:11:01.597 "data_offset": 0, 00:11:01.597 "data_size": 65536 00:11:01.597 }, 00:11:01.597 { 00:11:01.597 "name": "BaseBdev2", 00:11:01.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.598 "is_configured": false, 00:11:01.598 "data_offset": 0, 00:11:01.598 "data_size": 0 00:11:01.598 }, 00:11:01.598 { 00:11:01.598 "name": "BaseBdev3", 00:11:01.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.598 "is_configured": false, 00:11:01.598 "data_offset": 0, 00:11:01.598 "data_size": 0 00:11:01.598 }, 00:11:01.598 { 00:11:01.598 "name": "BaseBdev4", 00:11:01.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.598 "is_configured": false, 00:11:01.598 "data_offset": 0, 00:11:01.598 "data_size": 0 00:11:01.598 } 00:11:01.598 ] 00:11:01.598 }' 00:11:01.598 10:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.598 10:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.165 10:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:02.165 10:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.165 10:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.165 [2024-11-20 10:34:05.390234] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:02.166 [2024-11-20 10:34:05.390367] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:02.166 10:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.166 10:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:02.166 10:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.166 10:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.166 [2024-11-20 10:34:05.398284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:02.166 [2024-11-20 10:34:05.400223] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:02.166 [2024-11-20 10:34:05.400328] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:02.166 [2024-11-20 10:34:05.400386] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:02.166 [2024-11-20 10:34:05.400423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:02.166 [2024-11-20 10:34:05.400452] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:02.166 [2024-11-20 10:34:05.400522] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:02.166 10:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.166 10:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:02.166 10:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:02.166 10:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:11:02.166 10:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.166 10:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.166 10:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:02.166 10:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.166 10:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.166 10:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.166 10:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.166 10:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.166 10:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.166 10:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.166 10:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.166 10:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.166 10:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.166 10:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.166 10:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.166 "name": "Existed_Raid", 00:11:02.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.166 "strip_size_kb": 64, 00:11:02.166 "state": "configuring", 00:11:02.166 "raid_level": "raid0", 00:11:02.166 "superblock": false, 00:11:02.166 "num_base_bdevs": 4, 00:11:02.166 
"num_base_bdevs_discovered": 1, 00:11:02.166 "num_base_bdevs_operational": 4, 00:11:02.166 "base_bdevs_list": [ 00:11:02.166 { 00:11:02.166 "name": "BaseBdev1", 00:11:02.166 "uuid": "cc658763-f05b-4a3d-8106-cc4d7751c454", 00:11:02.166 "is_configured": true, 00:11:02.166 "data_offset": 0, 00:11:02.166 "data_size": 65536 00:11:02.166 }, 00:11:02.166 { 00:11:02.166 "name": "BaseBdev2", 00:11:02.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.166 "is_configured": false, 00:11:02.166 "data_offset": 0, 00:11:02.166 "data_size": 0 00:11:02.166 }, 00:11:02.166 { 00:11:02.166 "name": "BaseBdev3", 00:11:02.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.166 "is_configured": false, 00:11:02.166 "data_offset": 0, 00:11:02.166 "data_size": 0 00:11:02.166 }, 00:11:02.166 { 00:11:02.166 "name": "BaseBdev4", 00:11:02.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.166 "is_configured": false, 00:11:02.166 "data_offset": 0, 00:11:02.166 "data_size": 0 00:11:02.166 } 00:11:02.166 ] 00:11:02.166 }' 00:11:02.166 10:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.166 10:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.425 10:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:02.425 10:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.425 10:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.425 [2024-11-20 10:34:05.900715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:02.425 BaseBdev2 00:11:02.425 10:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.684 10:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:02.684 10:34:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:02.684 10:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:02.684 10:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:02.684 10:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:02.684 10:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:02.684 10:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:02.684 10:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.684 10:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.684 10:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.684 10:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:02.684 10:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.684 10:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.684 [ 00:11:02.684 { 00:11:02.684 "name": "BaseBdev2", 00:11:02.684 "aliases": [ 00:11:02.684 "23f25565-0468-4e22-83b1-95a06f1faf1c" 00:11:02.684 ], 00:11:02.684 "product_name": "Malloc disk", 00:11:02.684 "block_size": 512, 00:11:02.684 "num_blocks": 65536, 00:11:02.684 "uuid": "23f25565-0468-4e22-83b1-95a06f1faf1c", 00:11:02.684 "assigned_rate_limits": { 00:11:02.684 "rw_ios_per_sec": 0, 00:11:02.684 "rw_mbytes_per_sec": 0, 00:11:02.684 "r_mbytes_per_sec": 0, 00:11:02.684 "w_mbytes_per_sec": 0 00:11:02.684 }, 00:11:02.684 "claimed": true, 00:11:02.685 "claim_type": "exclusive_write", 00:11:02.685 "zoned": false, 00:11:02.685 "supported_io_types": { 
00:11:02.685 "read": true, 00:11:02.685 "write": true, 00:11:02.685 "unmap": true, 00:11:02.685 "flush": true, 00:11:02.685 "reset": true, 00:11:02.685 "nvme_admin": false, 00:11:02.685 "nvme_io": false, 00:11:02.685 "nvme_io_md": false, 00:11:02.685 "write_zeroes": true, 00:11:02.685 "zcopy": true, 00:11:02.685 "get_zone_info": false, 00:11:02.685 "zone_management": false, 00:11:02.685 "zone_append": false, 00:11:02.685 "compare": false, 00:11:02.685 "compare_and_write": false, 00:11:02.685 "abort": true, 00:11:02.685 "seek_hole": false, 00:11:02.685 "seek_data": false, 00:11:02.685 "copy": true, 00:11:02.685 "nvme_iov_md": false 00:11:02.685 }, 00:11:02.685 "memory_domains": [ 00:11:02.685 { 00:11:02.685 "dma_device_id": "system", 00:11:02.685 "dma_device_type": 1 00:11:02.685 }, 00:11:02.685 { 00:11:02.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.685 "dma_device_type": 2 00:11:02.685 } 00:11:02.685 ], 00:11:02.685 "driver_specific": {} 00:11:02.685 } 00:11:02.685 ] 00:11:02.685 10:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.685 10:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:02.685 10:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:02.685 10:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:02.685 10:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:02.685 10:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.685 10:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.685 10:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:02.685 10:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:02.685 10:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.685 10:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.685 10:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.685 10:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.685 10:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.685 10:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.685 10:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.685 10:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.685 10:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.685 10:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.685 10:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.685 "name": "Existed_Raid", 00:11:02.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.685 "strip_size_kb": 64, 00:11:02.685 "state": "configuring", 00:11:02.685 "raid_level": "raid0", 00:11:02.685 "superblock": false, 00:11:02.685 "num_base_bdevs": 4, 00:11:02.685 "num_base_bdevs_discovered": 2, 00:11:02.685 "num_base_bdevs_operational": 4, 00:11:02.685 "base_bdevs_list": [ 00:11:02.685 { 00:11:02.685 "name": "BaseBdev1", 00:11:02.685 "uuid": "cc658763-f05b-4a3d-8106-cc4d7751c454", 00:11:02.685 "is_configured": true, 00:11:02.685 "data_offset": 0, 00:11:02.685 "data_size": 65536 00:11:02.685 }, 00:11:02.685 { 00:11:02.685 "name": "BaseBdev2", 00:11:02.685 "uuid": "23f25565-0468-4e22-83b1-95a06f1faf1c", 00:11:02.685 
"is_configured": true, 00:11:02.685 "data_offset": 0, 00:11:02.685 "data_size": 65536 00:11:02.685 }, 00:11:02.685 { 00:11:02.685 "name": "BaseBdev3", 00:11:02.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.685 "is_configured": false, 00:11:02.685 "data_offset": 0, 00:11:02.685 "data_size": 0 00:11:02.685 }, 00:11:02.685 { 00:11:02.685 "name": "BaseBdev4", 00:11:02.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.685 "is_configured": false, 00:11:02.685 "data_offset": 0, 00:11:02.685 "data_size": 0 00:11:02.685 } 00:11:02.685 ] 00:11:02.685 }' 00:11:02.685 10:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.685 10:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.944 10:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:02.944 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.944 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.203 [2024-11-20 10:34:06.438760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:03.203 BaseBdev3 00:11:03.203 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.203 10:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:03.203 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:03.203 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:03.203 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:03.203 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:03.203 10:34:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:03.203 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:03.203 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.203 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.203 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.203 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:03.203 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.203 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.203 [ 00:11:03.203 { 00:11:03.203 "name": "BaseBdev3", 00:11:03.203 "aliases": [ 00:11:03.203 "afc8e23c-d39d-49ad-a3c3-f9bee2068335" 00:11:03.203 ], 00:11:03.203 "product_name": "Malloc disk", 00:11:03.203 "block_size": 512, 00:11:03.203 "num_blocks": 65536, 00:11:03.203 "uuid": "afc8e23c-d39d-49ad-a3c3-f9bee2068335", 00:11:03.203 "assigned_rate_limits": { 00:11:03.203 "rw_ios_per_sec": 0, 00:11:03.203 "rw_mbytes_per_sec": 0, 00:11:03.203 "r_mbytes_per_sec": 0, 00:11:03.203 "w_mbytes_per_sec": 0 00:11:03.203 }, 00:11:03.203 "claimed": true, 00:11:03.203 "claim_type": "exclusive_write", 00:11:03.203 "zoned": false, 00:11:03.203 "supported_io_types": { 00:11:03.203 "read": true, 00:11:03.203 "write": true, 00:11:03.203 "unmap": true, 00:11:03.203 "flush": true, 00:11:03.203 "reset": true, 00:11:03.203 "nvme_admin": false, 00:11:03.203 "nvme_io": false, 00:11:03.203 "nvme_io_md": false, 00:11:03.203 "write_zeroes": true, 00:11:03.203 "zcopy": true, 00:11:03.203 "get_zone_info": false, 00:11:03.203 "zone_management": false, 00:11:03.203 "zone_append": false, 00:11:03.203 "compare": false, 00:11:03.203 "compare_and_write": false, 
00:11:03.203 "abort": true, 00:11:03.203 "seek_hole": false, 00:11:03.203 "seek_data": false, 00:11:03.203 "copy": true, 00:11:03.203 "nvme_iov_md": false 00:11:03.203 }, 00:11:03.203 "memory_domains": [ 00:11:03.203 { 00:11:03.203 "dma_device_id": "system", 00:11:03.203 "dma_device_type": 1 00:11:03.203 }, 00:11:03.203 { 00:11:03.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.203 "dma_device_type": 2 00:11:03.203 } 00:11:03.203 ], 00:11:03.203 "driver_specific": {} 00:11:03.203 } 00:11:03.203 ] 00:11:03.203 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.203 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:03.203 10:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:03.203 10:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:03.203 10:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:03.203 10:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.203 10:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.203 10:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:03.203 10:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.203 10:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.203 10:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.203 10:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.203 10:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:03.203 10:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.203 10:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.203 10:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.203 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.203 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.203 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.203 10:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.203 "name": "Existed_Raid", 00:11:03.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.203 "strip_size_kb": 64, 00:11:03.203 "state": "configuring", 00:11:03.203 "raid_level": "raid0", 00:11:03.203 "superblock": false, 00:11:03.203 "num_base_bdevs": 4, 00:11:03.203 "num_base_bdevs_discovered": 3, 00:11:03.203 "num_base_bdevs_operational": 4, 00:11:03.203 "base_bdevs_list": [ 00:11:03.203 { 00:11:03.204 "name": "BaseBdev1", 00:11:03.204 "uuid": "cc658763-f05b-4a3d-8106-cc4d7751c454", 00:11:03.204 "is_configured": true, 00:11:03.204 "data_offset": 0, 00:11:03.204 "data_size": 65536 00:11:03.204 }, 00:11:03.204 { 00:11:03.204 "name": "BaseBdev2", 00:11:03.204 "uuid": "23f25565-0468-4e22-83b1-95a06f1faf1c", 00:11:03.204 "is_configured": true, 00:11:03.204 "data_offset": 0, 00:11:03.204 "data_size": 65536 00:11:03.204 }, 00:11:03.204 { 00:11:03.204 "name": "BaseBdev3", 00:11:03.204 "uuid": "afc8e23c-d39d-49ad-a3c3-f9bee2068335", 00:11:03.204 "is_configured": true, 00:11:03.204 "data_offset": 0, 00:11:03.204 "data_size": 65536 00:11:03.204 }, 00:11:03.204 { 00:11:03.204 "name": "BaseBdev4", 00:11:03.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.204 "is_configured": false, 
00:11:03.204 "data_offset": 0, 00:11:03.204 "data_size": 0 00:11:03.204 } 00:11:03.204 ] 00:11:03.204 }' 00:11:03.204 10:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.204 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.463 10:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:03.463 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.463 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.463 [2024-11-20 10:34:06.915874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:03.463 [2024-11-20 10:34:06.916034] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:03.463 [2024-11-20 10:34:06.916066] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:03.463 [2024-11-20 10:34:06.916392] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:03.463 [2024-11-20 10:34:06.916661] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:03.463 [2024-11-20 10:34:06.916719] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:03.463 [2024-11-20 10:34:06.917060] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:03.463 BaseBdev4 00:11:03.463 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.463 10:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:03.463 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:03.463 10:34:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:03.463 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:03.463 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:03.463 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:03.463 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:03.463 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.463 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.463 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.463 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:03.463 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.463 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.463 [ 00:11:03.463 { 00:11:03.463 "name": "BaseBdev4", 00:11:03.463 "aliases": [ 00:11:03.463 "74eb5822-909f-4a05-89ad-8a628f2eb9fd" 00:11:03.463 ], 00:11:03.463 "product_name": "Malloc disk", 00:11:03.463 "block_size": 512, 00:11:03.463 "num_blocks": 65536, 00:11:03.463 "uuid": "74eb5822-909f-4a05-89ad-8a628f2eb9fd", 00:11:03.463 "assigned_rate_limits": { 00:11:03.463 "rw_ios_per_sec": 0, 00:11:03.463 "rw_mbytes_per_sec": 0, 00:11:03.463 "r_mbytes_per_sec": 0, 00:11:03.463 "w_mbytes_per_sec": 0 00:11:03.463 }, 00:11:03.463 "claimed": true, 00:11:03.463 "claim_type": "exclusive_write", 00:11:03.463 "zoned": false, 00:11:03.463 "supported_io_types": { 00:11:03.463 "read": true, 00:11:03.463 "write": true, 00:11:03.463 "unmap": true, 00:11:03.464 "flush": true, 00:11:03.464 "reset": true, 00:11:03.464 
"nvme_admin": false, 00:11:03.464 "nvme_io": false, 00:11:03.464 "nvme_io_md": false, 00:11:03.464 "write_zeroes": true, 00:11:03.464 "zcopy": true, 00:11:03.464 "get_zone_info": false, 00:11:03.464 "zone_management": false, 00:11:03.464 "zone_append": false, 00:11:03.464 "compare": false, 00:11:03.464 "compare_and_write": false, 00:11:03.464 "abort": true, 00:11:03.464 "seek_hole": false, 00:11:03.464 "seek_data": false, 00:11:03.464 "copy": true, 00:11:03.464 "nvme_iov_md": false 00:11:03.464 }, 00:11:03.464 "memory_domains": [ 00:11:03.464 { 00:11:03.723 "dma_device_id": "system", 00:11:03.723 "dma_device_type": 1 00:11:03.723 }, 00:11:03.723 { 00:11:03.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.723 "dma_device_type": 2 00:11:03.723 } 00:11:03.723 ], 00:11:03.723 "driver_specific": {} 00:11:03.723 } 00:11:03.723 ] 00:11:03.723 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.723 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:03.723 10:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:03.723 10:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:03.723 10:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:03.723 10:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.723 10:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:03.723 10:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:03.723 10:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.723 10:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.723 10:34:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.723 10:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.723 10:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.723 10:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.723 10:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.723 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.723 10:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.723 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.723 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.723 10:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.723 "name": "Existed_Raid", 00:11:03.723 "uuid": "27cf1a6a-c09c-4c9c-af5f-e0412594bd32", 00:11:03.723 "strip_size_kb": 64, 00:11:03.723 "state": "online", 00:11:03.723 "raid_level": "raid0", 00:11:03.723 "superblock": false, 00:11:03.723 "num_base_bdevs": 4, 00:11:03.723 "num_base_bdevs_discovered": 4, 00:11:03.723 "num_base_bdevs_operational": 4, 00:11:03.723 "base_bdevs_list": [ 00:11:03.723 { 00:11:03.723 "name": "BaseBdev1", 00:11:03.723 "uuid": "cc658763-f05b-4a3d-8106-cc4d7751c454", 00:11:03.723 "is_configured": true, 00:11:03.723 "data_offset": 0, 00:11:03.723 "data_size": 65536 00:11:03.723 }, 00:11:03.723 { 00:11:03.723 "name": "BaseBdev2", 00:11:03.723 "uuid": "23f25565-0468-4e22-83b1-95a06f1faf1c", 00:11:03.723 "is_configured": true, 00:11:03.723 "data_offset": 0, 00:11:03.723 "data_size": 65536 00:11:03.723 }, 00:11:03.723 { 00:11:03.723 "name": "BaseBdev3", 00:11:03.723 "uuid": 
"afc8e23c-d39d-49ad-a3c3-f9bee2068335", 00:11:03.723 "is_configured": true, 00:11:03.723 "data_offset": 0, 00:11:03.723 "data_size": 65536 00:11:03.723 }, 00:11:03.723 { 00:11:03.723 "name": "BaseBdev4", 00:11:03.723 "uuid": "74eb5822-909f-4a05-89ad-8a628f2eb9fd", 00:11:03.723 "is_configured": true, 00:11:03.723 "data_offset": 0, 00:11:03.723 "data_size": 65536 00:11:03.723 } 00:11:03.723 ] 00:11:03.723 }' 00:11:03.723 10:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.723 10:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.982 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:03.982 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:03.982 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:03.982 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:03.982 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:03.982 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:03.982 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:03.982 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:03.982 10:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.982 10:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.982 [2024-11-20 10:34:07.427482] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:03.982 10:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.240 10:34:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:04.240 "name": "Existed_Raid", 00:11:04.240 "aliases": [ 00:11:04.240 "27cf1a6a-c09c-4c9c-af5f-e0412594bd32" 00:11:04.240 ], 00:11:04.240 "product_name": "Raid Volume", 00:11:04.240 "block_size": 512, 00:11:04.240 "num_blocks": 262144, 00:11:04.240 "uuid": "27cf1a6a-c09c-4c9c-af5f-e0412594bd32", 00:11:04.240 "assigned_rate_limits": { 00:11:04.240 "rw_ios_per_sec": 0, 00:11:04.240 "rw_mbytes_per_sec": 0, 00:11:04.240 "r_mbytes_per_sec": 0, 00:11:04.240 "w_mbytes_per_sec": 0 00:11:04.240 }, 00:11:04.241 "claimed": false, 00:11:04.241 "zoned": false, 00:11:04.241 "supported_io_types": { 00:11:04.241 "read": true, 00:11:04.241 "write": true, 00:11:04.241 "unmap": true, 00:11:04.241 "flush": true, 00:11:04.241 "reset": true, 00:11:04.241 "nvme_admin": false, 00:11:04.241 "nvme_io": false, 00:11:04.241 "nvme_io_md": false, 00:11:04.241 "write_zeroes": true, 00:11:04.241 "zcopy": false, 00:11:04.241 "get_zone_info": false, 00:11:04.241 "zone_management": false, 00:11:04.241 "zone_append": false, 00:11:04.241 "compare": false, 00:11:04.241 "compare_and_write": false, 00:11:04.241 "abort": false, 00:11:04.241 "seek_hole": false, 00:11:04.241 "seek_data": false, 00:11:04.241 "copy": false, 00:11:04.241 "nvme_iov_md": false 00:11:04.241 }, 00:11:04.241 "memory_domains": [ 00:11:04.241 { 00:11:04.241 "dma_device_id": "system", 00:11:04.241 "dma_device_type": 1 00:11:04.241 }, 00:11:04.241 { 00:11:04.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.241 "dma_device_type": 2 00:11:04.241 }, 00:11:04.241 { 00:11:04.241 "dma_device_id": "system", 00:11:04.241 "dma_device_type": 1 00:11:04.241 }, 00:11:04.241 { 00:11:04.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.241 "dma_device_type": 2 00:11:04.241 }, 00:11:04.241 { 00:11:04.241 "dma_device_id": "system", 00:11:04.241 "dma_device_type": 1 00:11:04.241 }, 00:11:04.241 { 00:11:04.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:04.241 "dma_device_type": 2 00:11:04.241 }, 00:11:04.241 { 00:11:04.241 "dma_device_id": "system", 00:11:04.241 "dma_device_type": 1 00:11:04.241 }, 00:11:04.241 { 00:11:04.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.241 "dma_device_type": 2 00:11:04.241 } 00:11:04.241 ], 00:11:04.241 "driver_specific": { 00:11:04.241 "raid": { 00:11:04.241 "uuid": "27cf1a6a-c09c-4c9c-af5f-e0412594bd32", 00:11:04.241 "strip_size_kb": 64, 00:11:04.241 "state": "online", 00:11:04.241 "raid_level": "raid0", 00:11:04.241 "superblock": false, 00:11:04.241 "num_base_bdevs": 4, 00:11:04.241 "num_base_bdevs_discovered": 4, 00:11:04.241 "num_base_bdevs_operational": 4, 00:11:04.241 "base_bdevs_list": [ 00:11:04.241 { 00:11:04.241 "name": "BaseBdev1", 00:11:04.241 "uuid": "cc658763-f05b-4a3d-8106-cc4d7751c454", 00:11:04.241 "is_configured": true, 00:11:04.241 "data_offset": 0, 00:11:04.241 "data_size": 65536 00:11:04.241 }, 00:11:04.241 { 00:11:04.241 "name": "BaseBdev2", 00:11:04.241 "uuid": "23f25565-0468-4e22-83b1-95a06f1faf1c", 00:11:04.241 "is_configured": true, 00:11:04.241 "data_offset": 0, 00:11:04.241 "data_size": 65536 00:11:04.241 }, 00:11:04.241 { 00:11:04.241 "name": "BaseBdev3", 00:11:04.241 "uuid": "afc8e23c-d39d-49ad-a3c3-f9bee2068335", 00:11:04.241 "is_configured": true, 00:11:04.241 "data_offset": 0, 00:11:04.241 "data_size": 65536 00:11:04.241 }, 00:11:04.241 { 00:11:04.241 "name": "BaseBdev4", 00:11:04.241 "uuid": "74eb5822-909f-4a05-89ad-8a628f2eb9fd", 00:11:04.241 "is_configured": true, 00:11:04.241 "data_offset": 0, 00:11:04.241 "data_size": 65536 00:11:04.241 } 00:11:04.241 ] 00:11:04.241 } 00:11:04.241 } 00:11:04.241 }' 00:11:04.241 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:04.241 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:04.241 BaseBdev2 00:11:04.241 BaseBdev3 
00:11:04.241 BaseBdev4' 00:11:04.241 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.241 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:04.241 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.241 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.241 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:04.241 10:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.241 10:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.241 10:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.241 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.241 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.241 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.241 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.241 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:04.241 10:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.241 10:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.241 10:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.241 10:34:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.241 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.241 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.241 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.241 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:04.241 10:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.241 10:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.241 10:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.501 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.501 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.501 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.501 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:04.501 10:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.501 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.501 10:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.501 10:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.501 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.501 10:34:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.501 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:04.501 10:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.501 10:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.501 [2024-11-20 10:34:07.774619] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:04.501 [2024-11-20 10:34:07.774707] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:04.501 [2024-11-20 10:34:07.774815] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:04.501 10:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.501 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:04.501 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:04.501 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:04.501 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:04.501 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:04.501 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:11:04.501 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.501 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:04.501 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:04.501 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:04.501 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:04.501 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.501 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.501 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.501 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.501 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.501 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.501 10:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.501 10:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.501 10:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.501 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.501 "name": "Existed_Raid", 00:11:04.501 "uuid": "27cf1a6a-c09c-4c9c-af5f-e0412594bd32", 00:11:04.501 "strip_size_kb": 64, 00:11:04.501 "state": "offline", 00:11:04.501 "raid_level": "raid0", 00:11:04.501 "superblock": false, 00:11:04.501 "num_base_bdevs": 4, 00:11:04.501 "num_base_bdevs_discovered": 3, 00:11:04.501 "num_base_bdevs_operational": 3, 00:11:04.501 "base_bdevs_list": [ 00:11:04.501 { 00:11:04.501 "name": null, 00:11:04.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.501 "is_configured": false, 00:11:04.501 "data_offset": 0, 00:11:04.501 "data_size": 65536 00:11:04.501 }, 00:11:04.501 { 00:11:04.501 "name": "BaseBdev2", 00:11:04.501 "uuid": "23f25565-0468-4e22-83b1-95a06f1faf1c", 00:11:04.501 "is_configured": 
true, 00:11:04.501 "data_offset": 0, 00:11:04.501 "data_size": 65536 00:11:04.501 }, 00:11:04.501 { 00:11:04.501 "name": "BaseBdev3", 00:11:04.501 "uuid": "afc8e23c-d39d-49ad-a3c3-f9bee2068335", 00:11:04.501 "is_configured": true, 00:11:04.501 "data_offset": 0, 00:11:04.501 "data_size": 65536 00:11:04.501 }, 00:11:04.501 { 00:11:04.501 "name": "BaseBdev4", 00:11:04.502 "uuid": "74eb5822-909f-4a05-89ad-8a628f2eb9fd", 00:11:04.502 "is_configured": true, 00:11:04.502 "data_offset": 0, 00:11:04.502 "data_size": 65536 00:11:04.502 } 00:11:04.502 ] 00:11:04.502 }' 00:11:04.502 10:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.502 10:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.069 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:05.070 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:05.070 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:05.070 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.070 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.070 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.070 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.070 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:05.070 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:05.070 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:05.070 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:05.070 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.070 [2024-11-20 10:34:08.319055] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:05.070 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.070 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:05.070 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:05.070 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.070 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.070 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:05.070 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.070 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.070 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:05.070 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:05.070 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:05.070 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.070 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.070 [2024-11-20 10:34:08.465607] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:05.328 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.328 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:05.328 10:34:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:05.328 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.328 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:05.328 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.328 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.328 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.328 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:05.328 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:05.328 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:05.328 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.328 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.328 [2024-11-20 10:34:08.615042] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:05.328 [2024-11-20 10:34:08.615167] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:05.328 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.328 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:05.328 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:05.328 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:05.328 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:05.328 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.328 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.328 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.328 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:05.328 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:05.328 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:05.328 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:05.328 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:05.328 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:05.328 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.328 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.587 BaseBdev2 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.587 [ 00:11:05.587 { 00:11:05.587 "name": "BaseBdev2", 00:11:05.587 "aliases": [ 00:11:05.587 "0fa1ad7d-2611-4e62-a3e1-d1d45aa44c73" 00:11:05.587 ], 00:11:05.587 "product_name": "Malloc disk", 00:11:05.587 "block_size": 512, 00:11:05.587 "num_blocks": 65536, 00:11:05.587 "uuid": "0fa1ad7d-2611-4e62-a3e1-d1d45aa44c73", 00:11:05.587 "assigned_rate_limits": { 00:11:05.587 "rw_ios_per_sec": 0, 00:11:05.587 "rw_mbytes_per_sec": 0, 00:11:05.587 "r_mbytes_per_sec": 0, 00:11:05.587 "w_mbytes_per_sec": 0 00:11:05.587 }, 00:11:05.587 "claimed": false, 00:11:05.587 "zoned": false, 00:11:05.587 "supported_io_types": { 00:11:05.587 "read": true, 00:11:05.587 "write": true, 00:11:05.587 "unmap": true, 00:11:05.587 "flush": true, 00:11:05.587 "reset": true, 00:11:05.587 "nvme_admin": false, 00:11:05.587 "nvme_io": false, 00:11:05.587 "nvme_io_md": false, 00:11:05.587 "write_zeroes": true, 00:11:05.587 "zcopy": true, 00:11:05.587 "get_zone_info": false, 00:11:05.587 "zone_management": false, 00:11:05.587 "zone_append": false, 00:11:05.587 "compare": false, 00:11:05.587 "compare_and_write": false, 00:11:05.587 "abort": true, 00:11:05.587 "seek_hole": false, 00:11:05.587 "seek_data": false, 
00:11:05.587 "copy": true, 00:11:05.587 "nvme_iov_md": false 00:11:05.587 }, 00:11:05.587 "memory_domains": [ 00:11:05.587 { 00:11:05.587 "dma_device_id": "system", 00:11:05.587 "dma_device_type": 1 00:11:05.587 }, 00:11:05.587 { 00:11:05.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.587 "dma_device_type": 2 00:11:05.587 } 00:11:05.587 ], 00:11:05.587 "driver_specific": {} 00:11:05.587 } 00:11:05.587 ] 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.587 BaseBdev3 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:05.587 
10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.587 [ 00:11:05.587 { 00:11:05.587 "name": "BaseBdev3", 00:11:05.587 "aliases": [ 00:11:05.587 "7c3d8331-e133-47fa-a302-d19426089d5f" 00:11:05.587 ], 00:11:05.587 "product_name": "Malloc disk", 00:11:05.587 "block_size": 512, 00:11:05.587 "num_blocks": 65536, 00:11:05.587 "uuid": "7c3d8331-e133-47fa-a302-d19426089d5f", 00:11:05.587 "assigned_rate_limits": { 00:11:05.587 "rw_ios_per_sec": 0, 00:11:05.587 "rw_mbytes_per_sec": 0, 00:11:05.587 "r_mbytes_per_sec": 0, 00:11:05.587 "w_mbytes_per_sec": 0 00:11:05.587 }, 00:11:05.587 "claimed": false, 00:11:05.587 "zoned": false, 00:11:05.587 "supported_io_types": { 00:11:05.587 "read": true, 00:11:05.587 "write": true, 00:11:05.587 "unmap": true, 00:11:05.587 "flush": true, 00:11:05.587 "reset": true, 00:11:05.587 "nvme_admin": false, 00:11:05.587 "nvme_io": false, 00:11:05.587 "nvme_io_md": false, 00:11:05.587 "write_zeroes": true, 00:11:05.587 "zcopy": true, 00:11:05.587 "get_zone_info": false, 00:11:05.587 "zone_management": false, 00:11:05.587 "zone_append": false, 00:11:05.587 "compare": false, 00:11:05.587 "compare_and_write": false, 00:11:05.587 "abort": true, 00:11:05.587 "seek_hole": false, 00:11:05.587 "seek_data": false, 00:11:05.587 
"copy": true, 00:11:05.587 "nvme_iov_md": false 00:11:05.587 }, 00:11:05.587 "memory_domains": [ 00:11:05.587 { 00:11:05.587 "dma_device_id": "system", 00:11:05.587 "dma_device_type": 1 00:11:05.587 }, 00:11:05.587 { 00:11:05.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.587 "dma_device_type": 2 00:11:05.587 } 00:11:05.587 ], 00:11:05.587 "driver_specific": {} 00:11:05.587 } 00:11:05.587 ] 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.587 BaseBdev4 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:05.587 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:05.588 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:05.588 10:34:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:05.588 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.588 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.588 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.588 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:05.588 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.588 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.588 [ 00:11:05.588 { 00:11:05.588 "name": "BaseBdev4", 00:11:05.588 "aliases": [ 00:11:05.588 "2c0d1710-0093-4e14-be9b-220558f5ea6a" 00:11:05.588 ], 00:11:05.588 "product_name": "Malloc disk", 00:11:05.588 "block_size": 512, 00:11:05.588 "num_blocks": 65536, 00:11:05.588 "uuid": "2c0d1710-0093-4e14-be9b-220558f5ea6a", 00:11:05.588 "assigned_rate_limits": { 00:11:05.588 "rw_ios_per_sec": 0, 00:11:05.588 "rw_mbytes_per_sec": 0, 00:11:05.588 "r_mbytes_per_sec": 0, 00:11:05.588 "w_mbytes_per_sec": 0 00:11:05.588 }, 00:11:05.588 "claimed": false, 00:11:05.588 "zoned": false, 00:11:05.588 "supported_io_types": { 00:11:05.588 "read": true, 00:11:05.588 "write": true, 00:11:05.588 "unmap": true, 00:11:05.588 "flush": true, 00:11:05.588 "reset": true, 00:11:05.588 "nvme_admin": false, 00:11:05.588 "nvme_io": false, 00:11:05.588 "nvme_io_md": false, 00:11:05.588 "write_zeroes": true, 00:11:05.588 "zcopy": true, 00:11:05.588 "get_zone_info": false, 00:11:05.588 "zone_management": false, 00:11:05.588 "zone_append": false, 00:11:05.588 "compare": false, 00:11:05.588 "compare_and_write": false, 00:11:05.588 "abort": true, 00:11:05.588 "seek_hole": false, 00:11:05.588 "seek_data": false, 00:11:05.588 "copy": true, 
00:11:05.588 "nvme_iov_md": false 00:11:05.588 }, 00:11:05.588 "memory_domains": [ 00:11:05.588 { 00:11:05.588 "dma_device_id": "system", 00:11:05.588 "dma_device_type": 1 00:11:05.588 }, 00:11:05.588 { 00:11:05.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.588 "dma_device_type": 2 00:11:05.588 } 00:11:05.588 ], 00:11:05.588 "driver_specific": {} 00:11:05.588 } 00:11:05.588 ] 00:11:05.588 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.588 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:05.588 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:05.588 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:05.588 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:05.588 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.588 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.588 [2024-11-20 10:34:08.993322] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:05.588 [2024-11-20 10:34:08.993447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:05.588 [2024-11-20 10:34:08.993510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:05.588 [2024-11-20 10:34:08.995427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:05.588 [2024-11-20 10:34:08.995536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:05.588 10:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.588 10:34:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:05.588 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.588 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.588 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:05.588 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.588 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.588 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.588 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.588 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.588 10:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.588 10:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.588 10:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.588 10:34:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.588 10:34:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.588 10:34:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.588 10:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.588 "name": "Existed_Raid", 00:11:05.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.588 "strip_size_kb": 64, 00:11:05.588 "state": "configuring", 00:11:05.588 
"raid_level": "raid0", 00:11:05.588 "superblock": false, 00:11:05.588 "num_base_bdevs": 4, 00:11:05.588 "num_base_bdevs_discovered": 3, 00:11:05.588 "num_base_bdevs_operational": 4, 00:11:05.588 "base_bdevs_list": [ 00:11:05.588 { 00:11:05.588 "name": "BaseBdev1", 00:11:05.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.588 "is_configured": false, 00:11:05.588 "data_offset": 0, 00:11:05.588 "data_size": 0 00:11:05.588 }, 00:11:05.588 { 00:11:05.588 "name": "BaseBdev2", 00:11:05.588 "uuid": "0fa1ad7d-2611-4e62-a3e1-d1d45aa44c73", 00:11:05.588 "is_configured": true, 00:11:05.588 "data_offset": 0, 00:11:05.588 "data_size": 65536 00:11:05.588 }, 00:11:05.588 { 00:11:05.588 "name": "BaseBdev3", 00:11:05.588 "uuid": "7c3d8331-e133-47fa-a302-d19426089d5f", 00:11:05.588 "is_configured": true, 00:11:05.588 "data_offset": 0, 00:11:05.588 "data_size": 65536 00:11:05.588 }, 00:11:05.588 { 00:11:05.588 "name": "BaseBdev4", 00:11:05.588 "uuid": "2c0d1710-0093-4e14-be9b-220558f5ea6a", 00:11:05.588 "is_configured": true, 00:11:05.588 "data_offset": 0, 00:11:05.588 "data_size": 65536 00:11:05.588 } 00:11:05.588 ] 00:11:05.588 }' 00:11:05.588 10:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.588 10:34:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.156 10:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:06.156 10:34:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.156 10:34:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.156 [2024-11-20 10:34:09.384677] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:06.156 10:34:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.156 10:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:06.156 10:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.156 10:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.156 10:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:06.156 10:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.156 10:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.156 10:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.156 10:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.156 10:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.156 10:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.156 10:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.156 10:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.156 10:34:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.156 10:34:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.156 10:34:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.156 10:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.156 "name": "Existed_Raid", 00:11:06.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.156 "strip_size_kb": 64, 00:11:06.156 "state": "configuring", 00:11:06.156 "raid_level": "raid0", 00:11:06.156 "superblock": false, 00:11:06.156 
"num_base_bdevs": 4, 00:11:06.156 "num_base_bdevs_discovered": 2, 00:11:06.156 "num_base_bdevs_operational": 4, 00:11:06.156 "base_bdevs_list": [ 00:11:06.156 { 00:11:06.156 "name": "BaseBdev1", 00:11:06.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.156 "is_configured": false, 00:11:06.156 "data_offset": 0, 00:11:06.156 "data_size": 0 00:11:06.156 }, 00:11:06.156 { 00:11:06.156 "name": null, 00:11:06.156 "uuid": "0fa1ad7d-2611-4e62-a3e1-d1d45aa44c73", 00:11:06.156 "is_configured": false, 00:11:06.156 "data_offset": 0, 00:11:06.156 "data_size": 65536 00:11:06.156 }, 00:11:06.156 { 00:11:06.156 "name": "BaseBdev3", 00:11:06.156 "uuid": "7c3d8331-e133-47fa-a302-d19426089d5f", 00:11:06.156 "is_configured": true, 00:11:06.156 "data_offset": 0, 00:11:06.156 "data_size": 65536 00:11:06.156 }, 00:11:06.156 { 00:11:06.156 "name": "BaseBdev4", 00:11:06.156 "uuid": "2c0d1710-0093-4e14-be9b-220558f5ea6a", 00:11:06.156 "is_configured": true, 00:11:06.156 "data_offset": 0, 00:11:06.156 "data_size": 65536 00:11:06.156 } 00:11:06.156 ] 00:11:06.156 }' 00:11:06.156 10:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.156 10:34:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.415 10:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:06.415 10:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.415 10:34:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.415 10:34:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.415 10:34:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.674 10:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:06.674 10:34:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:06.674 10:34:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.674 10:34:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.674 [2024-11-20 10:34:09.942112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:06.674 BaseBdev1 00:11:06.674 10:34:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.674 10:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:06.674 10:34:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:06.674 10:34:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:06.674 10:34:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:06.675 10:34:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:06.675 10:34:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:06.675 10:34:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:06.675 10:34:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.675 10:34:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.675 10:34:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.675 10:34:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:06.675 10:34:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.675 10:34:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:06.675 [ 00:11:06.675 { 00:11:06.675 "name": "BaseBdev1", 00:11:06.675 "aliases": [ 00:11:06.675 "2b23b757-e7fe-4516-b9ad-173febe2b067" 00:11:06.675 ], 00:11:06.675 "product_name": "Malloc disk", 00:11:06.675 "block_size": 512, 00:11:06.675 "num_blocks": 65536, 00:11:06.675 "uuid": "2b23b757-e7fe-4516-b9ad-173febe2b067", 00:11:06.675 "assigned_rate_limits": { 00:11:06.675 "rw_ios_per_sec": 0, 00:11:06.675 "rw_mbytes_per_sec": 0, 00:11:06.675 "r_mbytes_per_sec": 0, 00:11:06.675 "w_mbytes_per_sec": 0 00:11:06.675 }, 00:11:06.675 "claimed": true, 00:11:06.675 "claim_type": "exclusive_write", 00:11:06.675 "zoned": false, 00:11:06.675 "supported_io_types": { 00:11:06.675 "read": true, 00:11:06.675 "write": true, 00:11:06.675 "unmap": true, 00:11:06.675 "flush": true, 00:11:06.675 "reset": true, 00:11:06.675 "nvme_admin": false, 00:11:06.675 "nvme_io": false, 00:11:06.675 "nvme_io_md": false, 00:11:06.675 "write_zeroes": true, 00:11:06.675 "zcopy": true, 00:11:06.675 "get_zone_info": false, 00:11:06.675 "zone_management": false, 00:11:06.675 "zone_append": false, 00:11:06.675 "compare": false, 00:11:06.675 "compare_and_write": false, 00:11:06.675 "abort": true, 00:11:06.675 "seek_hole": false, 00:11:06.675 "seek_data": false, 00:11:06.675 "copy": true, 00:11:06.675 "nvme_iov_md": false 00:11:06.675 }, 00:11:06.675 "memory_domains": [ 00:11:06.675 { 00:11:06.675 "dma_device_id": "system", 00:11:06.675 "dma_device_type": 1 00:11:06.675 }, 00:11:06.675 { 00:11:06.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.675 "dma_device_type": 2 00:11:06.675 } 00:11:06.675 ], 00:11:06.675 "driver_specific": {} 00:11:06.675 } 00:11:06.675 ] 00:11:06.675 10:34:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.675 10:34:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:06.675 10:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:06.675 10:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.675 10:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.675 10:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:06.675 10:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.675 10:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.675 10:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.675 10:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.675 10:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.675 10:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.675 10:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.675 10:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.675 10:34:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.675 10:34:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.675 10:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.675 10:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.675 "name": "Existed_Raid", 00:11:06.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.675 "strip_size_kb": 64, 00:11:06.675 "state": "configuring", 00:11:06.675 "raid_level": "raid0", 00:11:06.675 "superblock": false, 
00:11:06.675 "num_base_bdevs": 4, 00:11:06.675 "num_base_bdevs_discovered": 3, 00:11:06.675 "num_base_bdevs_operational": 4, 00:11:06.675 "base_bdevs_list": [ 00:11:06.675 { 00:11:06.675 "name": "BaseBdev1", 00:11:06.675 "uuid": "2b23b757-e7fe-4516-b9ad-173febe2b067", 00:11:06.675 "is_configured": true, 00:11:06.675 "data_offset": 0, 00:11:06.675 "data_size": 65536 00:11:06.675 }, 00:11:06.675 { 00:11:06.675 "name": null, 00:11:06.675 "uuid": "0fa1ad7d-2611-4e62-a3e1-d1d45aa44c73", 00:11:06.675 "is_configured": false, 00:11:06.675 "data_offset": 0, 00:11:06.675 "data_size": 65536 00:11:06.675 }, 00:11:06.675 { 00:11:06.675 "name": "BaseBdev3", 00:11:06.675 "uuid": "7c3d8331-e133-47fa-a302-d19426089d5f", 00:11:06.675 "is_configured": true, 00:11:06.675 "data_offset": 0, 00:11:06.675 "data_size": 65536 00:11:06.675 }, 00:11:06.675 { 00:11:06.675 "name": "BaseBdev4", 00:11:06.675 "uuid": "2c0d1710-0093-4e14-be9b-220558f5ea6a", 00:11:06.675 "is_configured": true, 00:11:06.675 "data_offset": 0, 00:11:06.675 "data_size": 65536 00:11:06.675 } 00:11:06.675 ] 00:11:06.675 }' 00:11:06.675 10:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.675 10:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.246 10:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.246 10:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:07.246 10:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.246 10:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.246 10:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.246 10:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:07.246 10:34:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:07.246 10:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.246 10:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.246 [2024-11-20 10:34:10.497383] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:07.246 10:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.246 10:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:07.246 10:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.246 10:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.246 10:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:07.246 10:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.246 10:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.246 10:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.246 10:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.246 10:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.246 10:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.246 10:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.246 10:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.246 10:34:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.246 10:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.246 10:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.246 10:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.246 "name": "Existed_Raid", 00:11:07.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.246 "strip_size_kb": 64, 00:11:07.246 "state": "configuring", 00:11:07.246 "raid_level": "raid0", 00:11:07.246 "superblock": false, 00:11:07.246 "num_base_bdevs": 4, 00:11:07.246 "num_base_bdevs_discovered": 2, 00:11:07.246 "num_base_bdevs_operational": 4, 00:11:07.246 "base_bdevs_list": [ 00:11:07.246 { 00:11:07.246 "name": "BaseBdev1", 00:11:07.246 "uuid": "2b23b757-e7fe-4516-b9ad-173febe2b067", 00:11:07.246 "is_configured": true, 00:11:07.246 "data_offset": 0, 00:11:07.246 "data_size": 65536 00:11:07.246 }, 00:11:07.246 { 00:11:07.246 "name": null, 00:11:07.246 "uuid": "0fa1ad7d-2611-4e62-a3e1-d1d45aa44c73", 00:11:07.246 "is_configured": false, 00:11:07.246 "data_offset": 0, 00:11:07.246 "data_size": 65536 00:11:07.246 }, 00:11:07.246 { 00:11:07.246 "name": null, 00:11:07.246 "uuid": "7c3d8331-e133-47fa-a302-d19426089d5f", 00:11:07.246 "is_configured": false, 00:11:07.246 "data_offset": 0, 00:11:07.246 "data_size": 65536 00:11:07.246 }, 00:11:07.246 { 00:11:07.246 "name": "BaseBdev4", 00:11:07.246 "uuid": "2c0d1710-0093-4e14-be9b-220558f5ea6a", 00:11:07.246 "is_configured": true, 00:11:07.246 "data_offset": 0, 00:11:07.246 "data_size": 65536 00:11:07.246 } 00:11:07.246 ] 00:11:07.246 }' 00:11:07.246 10:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.246 10:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.506 10:34:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:07.506 10:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.506 10:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.506 10:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.506 10:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.765 10:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:07.765 10:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:07.765 10:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.765 10:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.765 [2024-11-20 10:34:10.996550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:07.765 10:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.765 10:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:07.765 10:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.765 10:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.765 10:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:07.765 10:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.765 10:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.765 10:34:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.765 10:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.765 10:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.765 10:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.765 10:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.765 10:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.765 10:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.765 10:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.765 10:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.765 10:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.765 "name": "Existed_Raid", 00:11:07.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.765 "strip_size_kb": 64, 00:11:07.765 "state": "configuring", 00:11:07.765 "raid_level": "raid0", 00:11:07.765 "superblock": false, 00:11:07.765 "num_base_bdevs": 4, 00:11:07.765 "num_base_bdevs_discovered": 3, 00:11:07.765 "num_base_bdevs_operational": 4, 00:11:07.765 "base_bdevs_list": [ 00:11:07.765 { 00:11:07.765 "name": "BaseBdev1", 00:11:07.765 "uuid": "2b23b757-e7fe-4516-b9ad-173febe2b067", 00:11:07.765 "is_configured": true, 00:11:07.765 "data_offset": 0, 00:11:07.765 "data_size": 65536 00:11:07.765 }, 00:11:07.765 { 00:11:07.765 "name": null, 00:11:07.765 "uuid": "0fa1ad7d-2611-4e62-a3e1-d1d45aa44c73", 00:11:07.765 "is_configured": false, 00:11:07.765 "data_offset": 0, 00:11:07.765 "data_size": 65536 00:11:07.765 }, 00:11:07.765 { 00:11:07.765 "name": "BaseBdev3", 00:11:07.765 "uuid": "7c3d8331-e133-47fa-a302-d19426089d5f", 
00:11:07.765 "is_configured": true, 00:11:07.765 "data_offset": 0, 00:11:07.765 "data_size": 65536 00:11:07.765 }, 00:11:07.765 { 00:11:07.765 "name": "BaseBdev4", 00:11:07.765 "uuid": "2c0d1710-0093-4e14-be9b-220558f5ea6a", 00:11:07.765 "is_configured": true, 00:11:07.765 "data_offset": 0, 00:11:07.765 "data_size": 65536 00:11:07.765 } 00:11:07.765 ] 00:11:07.765 }' 00:11:07.765 10:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.765 10:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.024 10:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.024 10:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:08.024 10:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.024 10:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.024 10:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.024 10:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:08.024 10:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:08.024 10:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.024 10:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.024 [2024-11-20 10:34:11.479749] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:08.283 10:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.283 10:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:08.283 10:34:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.283 10:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.283 10:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:08.283 10:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.283 10:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.283 10:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.283 10:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.283 10:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.283 10:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.283 10:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.283 10:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.283 10:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.283 10:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.283 10:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.283 10:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.283 "name": "Existed_Raid", 00:11:08.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.283 "strip_size_kb": 64, 00:11:08.283 "state": "configuring", 00:11:08.283 "raid_level": "raid0", 00:11:08.283 "superblock": false, 00:11:08.283 "num_base_bdevs": 4, 00:11:08.283 "num_base_bdevs_discovered": 2, 00:11:08.283 
"num_base_bdevs_operational": 4, 00:11:08.283 "base_bdevs_list": [ 00:11:08.283 { 00:11:08.283 "name": null, 00:11:08.283 "uuid": "2b23b757-e7fe-4516-b9ad-173febe2b067", 00:11:08.283 "is_configured": false, 00:11:08.283 "data_offset": 0, 00:11:08.283 "data_size": 65536 00:11:08.283 }, 00:11:08.283 { 00:11:08.283 "name": null, 00:11:08.283 "uuid": "0fa1ad7d-2611-4e62-a3e1-d1d45aa44c73", 00:11:08.283 "is_configured": false, 00:11:08.283 "data_offset": 0, 00:11:08.283 "data_size": 65536 00:11:08.283 }, 00:11:08.283 { 00:11:08.283 "name": "BaseBdev3", 00:11:08.283 "uuid": "7c3d8331-e133-47fa-a302-d19426089d5f", 00:11:08.283 "is_configured": true, 00:11:08.283 "data_offset": 0, 00:11:08.283 "data_size": 65536 00:11:08.283 }, 00:11:08.283 { 00:11:08.283 "name": "BaseBdev4", 00:11:08.283 "uuid": "2c0d1710-0093-4e14-be9b-220558f5ea6a", 00:11:08.283 "is_configured": true, 00:11:08.283 "data_offset": 0, 00:11:08.283 "data_size": 65536 00:11:08.283 } 00:11:08.283 ] 00:11:08.283 }' 00:11:08.283 10:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.283 10:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.851 10:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.851 10:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.851 10:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:08.851 10:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.851 10:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.851 10:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:08.851 10:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:11:08.851 10:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.851 10:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.851 [2024-11-20 10:34:12.075636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:08.851 10:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.851 10:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:08.851 10:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.851 10:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.851 10:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:08.851 10:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.851 10:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.851 10:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.851 10:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.851 10:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.851 10:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.851 10:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.852 10:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.852 10:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.852 
10:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.852 10:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.852 10:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.852 "name": "Existed_Raid", 00:11:08.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.852 "strip_size_kb": 64, 00:11:08.852 "state": "configuring", 00:11:08.852 "raid_level": "raid0", 00:11:08.852 "superblock": false, 00:11:08.852 "num_base_bdevs": 4, 00:11:08.852 "num_base_bdevs_discovered": 3, 00:11:08.852 "num_base_bdevs_operational": 4, 00:11:08.852 "base_bdevs_list": [ 00:11:08.852 { 00:11:08.852 "name": null, 00:11:08.852 "uuid": "2b23b757-e7fe-4516-b9ad-173febe2b067", 00:11:08.852 "is_configured": false, 00:11:08.852 "data_offset": 0, 00:11:08.852 "data_size": 65536 00:11:08.852 }, 00:11:08.852 { 00:11:08.852 "name": "BaseBdev2", 00:11:08.852 "uuid": "0fa1ad7d-2611-4e62-a3e1-d1d45aa44c73", 00:11:08.852 "is_configured": true, 00:11:08.852 "data_offset": 0, 00:11:08.852 "data_size": 65536 00:11:08.852 }, 00:11:08.852 { 00:11:08.852 "name": "BaseBdev3", 00:11:08.852 "uuid": "7c3d8331-e133-47fa-a302-d19426089d5f", 00:11:08.852 "is_configured": true, 00:11:08.852 "data_offset": 0, 00:11:08.852 "data_size": 65536 00:11:08.852 }, 00:11:08.852 { 00:11:08.852 "name": "BaseBdev4", 00:11:08.852 "uuid": "2c0d1710-0093-4e14-be9b-220558f5ea6a", 00:11:08.852 "is_configured": true, 00:11:08.852 "data_offset": 0, 00:11:08.852 "data_size": 65536 00:11:08.852 } 00:11:08.852 ] 00:11:08.852 }' 00:11:08.852 10:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.852 10:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.111 10:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:09.111 10:34:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.111 10:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.111 10:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.111 10:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.111 10:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:09.111 10:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.111 10:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.111 10:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.111 10:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:09.111 10:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.111 10:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2b23b757-e7fe-4516-b9ad-173febe2b067 00:11:09.111 10:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.111 10:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.372 [2024-11-20 10:34:12.605198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:09.372 [2024-11-20 10:34:12.605446] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:09.372 [2024-11-20 10:34:12.605484] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:09.372 [2024-11-20 10:34:12.605852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:09.372 
[2024-11-20 10:34:12.606097] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:09.372 [2024-11-20 10:34:12.606157] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:09.372 [2024-11-20 10:34:12.606569] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.372 NewBaseBdev 00:11:09.372 10:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.372 10:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:09.372 10:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:09.372 10:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:09.372 10:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:09.372 10:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:09.372 10:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:09.372 10:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:09.372 10:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.372 10:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.372 10:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.372 10:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:09.372 10:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.372 10:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:09.372 [ 00:11:09.372 { 00:11:09.372 "name": "NewBaseBdev", 00:11:09.372 "aliases": [ 00:11:09.372 "2b23b757-e7fe-4516-b9ad-173febe2b067" 00:11:09.372 ], 00:11:09.372 "product_name": "Malloc disk", 00:11:09.372 "block_size": 512, 00:11:09.372 "num_blocks": 65536, 00:11:09.372 "uuid": "2b23b757-e7fe-4516-b9ad-173febe2b067", 00:11:09.372 "assigned_rate_limits": { 00:11:09.372 "rw_ios_per_sec": 0, 00:11:09.372 "rw_mbytes_per_sec": 0, 00:11:09.372 "r_mbytes_per_sec": 0, 00:11:09.372 "w_mbytes_per_sec": 0 00:11:09.372 }, 00:11:09.372 "claimed": true, 00:11:09.372 "claim_type": "exclusive_write", 00:11:09.372 "zoned": false, 00:11:09.372 "supported_io_types": { 00:11:09.372 "read": true, 00:11:09.372 "write": true, 00:11:09.372 "unmap": true, 00:11:09.372 "flush": true, 00:11:09.372 "reset": true, 00:11:09.372 "nvme_admin": false, 00:11:09.372 "nvme_io": false, 00:11:09.372 "nvme_io_md": false, 00:11:09.372 "write_zeroes": true, 00:11:09.372 "zcopy": true, 00:11:09.372 "get_zone_info": false, 00:11:09.372 "zone_management": false, 00:11:09.372 "zone_append": false, 00:11:09.372 "compare": false, 00:11:09.372 "compare_and_write": false, 00:11:09.372 "abort": true, 00:11:09.372 "seek_hole": false, 00:11:09.372 "seek_data": false, 00:11:09.372 "copy": true, 00:11:09.372 "nvme_iov_md": false 00:11:09.372 }, 00:11:09.372 "memory_domains": [ 00:11:09.372 { 00:11:09.372 "dma_device_id": "system", 00:11:09.372 "dma_device_type": 1 00:11:09.372 }, 00:11:09.372 { 00:11:09.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.372 "dma_device_type": 2 00:11:09.372 } 00:11:09.372 ], 00:11:09.372 "driver_specific": {} 00:11:09.372 } 00:11:09.372 ] 00:11:09.372 10:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.372 10:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:09.372 10:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid 
online raid0 64 4 00:11:09.372 10:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.372 10:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:09.372 10:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:09.372 10:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.372 10:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.372 10:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.372 10:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.372 10:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.372 10:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.373 10:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.373 10:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.373 10:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.373 10:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.373 10:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.373 10:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.373 "name": "Existed_Raid", 00:11:09.373 "uuid": "05258770-d87d-40fe-bb2e-755a7e33c2b4", 00:11:09.373 "strip_size_kb": 64, 00:11:09.373 "state": "online", 00:11:09.373 "raid_level": "raid0", 00:11:09.373 "superblock": false, 00:11:09.373 "num_base_bdevs": 4, 00:11:09.373 
"num_base_bdevs_discovered": 4, 00:11:09.373 "num_base_bdevs_operational": 4, 00:11:09.373 "base_bdevs_list": [ 00:11:09.373 { 00:11:09.373 "name": "NewBaseBdev", 00:11:09.373 "uuid": "2b23b757-e7fe-4516-b9ad-173febe2b067", 00:11:09.373 "is_configured": true, 00:11:09.373 "data_offset": 0, 00:11:09.373 "data_size": 65536 00:11:09.373 }, 00:11:09.373 { 00:11:09.373 "name": "BaseBdev2", 00:11:09.373 "uuid": "0fa1ad7d-2611-4e62-a3e1-d1d45aa44c73", 00:11:09.373 "is_configured": true, 00:11:09.373 "data_offset": 0, 00:11:09.373 "data_size": 65536 00:11:09.373 }, 00:11:09.373 { 00:11:09.373 "name": "BaseBdev3", 00:11:09.373 "uuid": "7c3d8331-e133-47fa-a302-d19426089d5f", 00:11:09.373 "is_configured": true, 00:11:09.373 "data_offset": 0, 00:11:09.373 "data_size": 65536 00:11:09.373 }, 00:11:09.373 { 00:11:09.373 "name": "BaseBdev4", 00:11:09.373 "uuid": "2c0d1710-0093-4e14-be9b-220558f5ea6a", 00:11:09.373 "is_configured": true, 00:11:09.373 "data_offset": 0, 00:11:09.373 "data_size": 65536 00:11:09.373 } 00:11:09.373 ] 00:11:09.373 }' 00:11:09.373 10:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.373 10:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.633 10:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:09.633 10:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:09.633 10:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:09.633 10:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:09.633 10:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:09.633 10:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:09.633 10:34:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:09.633 10:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.633 10:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:09.633 10:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.633 [2024-11-20 10:34:13.080902] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:09.633 10:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.633 10:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:09.633 "name": "Existed_Raid", 00:11:09.633 "aliases": [ 00:11:09.633 "05258770-d87d-40fe-bb2e-755a7e33c2b4" 00:11:09.633 ], 00:11:09.633 "product_name": "Raid Volume", 00:11:09.633 "block_size": 512, 00:11:09.633 "num_blocks": 262144, 00:11:09.633 "uuid": "05258770-d87d-40fe-bb2e-755a7e33c2b4", 00:11:09.633 "assigned_rate_limits": { 00:11:09.633 "rw_ios_per_sec": 0, 00:11:09.633 "rw_mbytes_per_sec": 0, 00:11:09.633 "r_mbytes_per_sec": 0, 00:11:09.633 "w_mbytes_per_sec": 0 00:11:09.633 }, 00:11:09.633 "claimed": false, 00:11:09.633 "zoned": false, 00:11:09.633 "supported_io_types": { 00:11:09.633 "read": true, 00:11:09.633 "write": true, 00:11:09.633 "unmap": true, 00:11:09.633 "flush": true, 00:11:09.633 "reset": true, 00:11:09.633 "nvme_admin": false, 00:11:09.633 "nvme_io": false, 00:11:09.633 "nvme_io_md": false, 00:11:09.633 "write_zeroes": true, 00:11:09.633 "zcopy": false, 00:11:09.633 "get_zone_info": false, 00:11:09.634 "zone_management": false, 00:11:09.634 "zone_append": false, 00:11:09.634 "compare": false, 00:11:09.634 "compare_and_write": false, 00:11:09.634 "abort": false, 00:11:09.634 "seek_hole": false, 00:11:09.634 "seek_data": false, 00:11:09.634 "copy": false, 00:11:09.634 "nvme_iov_md": false 00:11:09.634 }, 00:11:09.634 "memory_domains": [ 
00:11:09.634 { 00:11:09.634 "dma_device_id": "system", 00:11:09.634 "dma_device_type": 1 00:11:09.634 }, 00:11:09.634 { 00:11:09.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.634 "dma_device_type": 2 00:11:09.634 }, 00:11:09.634 { 00:11:09.634 "dma_device_id": "system", 00:11:09.634 "dma_device_type": 1 00:11:09.634 }, 00:11:09.634 { 00:11:09.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.634 "dma_device_type": 2 00:11:09.634 }, 00:11:09.634 { 00:11:09.634 "dma_device_id": "system", 00:11:09.634 "dma_device_type": 1 00:11:09.634 }, 00:11:09.634 { 00:11:09.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.634 "dma_device_type": 2 00:11:09.634 }, 00:11:09.634 { 00:11:09.634 "dma_device_id": "system", 00:11:09.634 "dma_device_type": 1 00:11:09.634 }, 00:11:09.634 { 00:11:09.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.634 "dma_device_type": 2 00:11:09.634 } 00:11:09.634 ], 00:11:09.634 "driver_specific": { 00:11:09.634 "raid": { 00:11:09.634 "uuid": "05258770-d87d-40fe-bb2e-755a7e33c2b4", 00:11:09.634 "strip_size_kb": 64, 00:11:09.634 "state": "online", 00:11:09.634 "raid_level": "raid0", 00:11:09.634 "superblock": false, 00:11:09.634 "num_base_bdevs": 4, 00:11:09.634 "num_base_bdevs_discovered": 4, 00:11:09.634 "num_base_bdevs_operational": 4, 00:11:09.634 "base_bdevs_list": [ 00:11:09.634 { 00:11:09.634 "name": "NewBaseBdev", 00:11:09.634 "uuid": "2b23b757-e7fe-4516-b9ad-173febe2b067", 00:11:09.634 "is_configured": true, 00:11:09.634 "data_offset": 0, 00:11:09.634 "data_size": 65536 00:11:09.634 }, 00:11:09.634 { 00:11:09.634 "name": "BaseBdev2", 00:11:09.634 "uuid": "0fa1ad7d-2611-4e62-a3e1-d1d45aa44c73", 00:11:09.634 "is_configured": true, 00:11:09.634 "data_offset": 0, 00:11:09.634 "data_size": 65536 00:11:09.634 }, 00:11:09.634 { 00:11:09.634 "name": "BaseBdev3", 00:11:09.634 "uuid": "7c3d8331-e133-47fa-a302-d19426089d5f", 00:11:09.634 "is_configured": true, 00:11:09.634 "data_offset": 0, 00:11:09.634 "data_size": 65536 
00:11:09.634 }, 00:11:09.634 { 00:11:09.634 "name": "BaseBdev4", 00:11:09.634 "uuid": "2c0d1710-0093-4e14-be9b-220558f5ea6a", 00:11:09.634 "is_configured": true, 00:11:09.634 "data_offset": 0, 00:11:09.634 "data_size": 65536 00:11:09.634 } 00:11:09.634 ] 00:11:09.634 } 00:11:09.634 } 00:11:09.634 }' 00:11:09.634 10:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:09.894 10:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:09.894 BaseBdev2 00:11:09.894 BaseBdev3 00:11:09.894 BaseBdev4' 00:11:09.894 10:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.895 
10:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.895 [2024-11-20 10:34:13.356112] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:09.895 [2024-11-20 10:34:13.356150] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:09.895 [2024-11-20 10:34:13.356246] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:09.895 [2024-11-20 10:34:13.356319] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:09.895 [2024-11-20 10:34:13.356330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69562 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 69562 ']' 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69562 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:09.895 10:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:10.155 10:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69562 00:11:10.155 killing process with pid 69562 00:11:10.155 10:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:10.155 10:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:10.155 10:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69562' 00:11:10.155 10:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69562 00:11:10.155 [2024-11-20 10:34:13.395420] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:10.155 10:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69562 00:11:10.418 [2024-11-20 10:34:13.787213] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:11.797 10:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:11.797 00:11:11.797 real 0m11.416s 00:11:11.797 user 0m18.147s 00:11:11.797 sys 0m1.988s 00:11:11.797 10:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.797 ************************************ 00:11:11.797 END TEST raid_state_function_test 00:11:11.797 ************************************ 00:11:11.797 10:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.797 10:34:14 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:11:11.797 10:34:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:11.797 10:34:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.797 10:34:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:11.797 ************************************ 00:11:11.797 START TEST raid_state_function_test_sb 00:11:11.797 ************************************ 00:11:11.797 10:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:11:11.797 10:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:11.797 10:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:11.797 10:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:11.797 10:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:11.797 10:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:11.797 10:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:11.797 10:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:11.797 10:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:11.797 10:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:11.797 10:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:11.797 10:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:11.797 10:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:11.797 10:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:11.797 
10:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:11.797 10:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:11.797 10:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:11.797 10:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:11.797 10:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:11.797 10:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:11.797 10:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:11.797 10:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:11.797 10:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:11.797 10:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:11.797 10:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:11.797 10:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:11.797 10:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:11.797 10:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:11.797 10:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:11.797 10:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:11.797 10:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70230 00:11:11.797 10:34:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:11.797 10:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70230' 00:11:11.797 Process raid pid: 70230 00:11:11.797 10:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70230 00:11:11.797 10:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70230 ']' 00:11:11.797 10:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.797 10:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:11.797 10:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.797 10:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:11.797 10:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.797 [2024-11-20 10:34:15.083821] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:11:11.797 [2024-11-20 10:34:15.083944] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.797 [2024-11-20 10:34:15.259049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.056 [2024-11-20 10:34:15.378868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.315 [2024-11-20 10:34:15.592192] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.315 [2024-11-20 10:34:15.592229] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.576 10:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:12.576 10:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:12.576 10:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:12.576 10:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.576 10:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.576 [2024-11-20 10:34:15.953881] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:12.576 [2024-11-20 10:34:15.953940] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:12.576 [2024-11-20 10:34:15.953951] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:12.576 [2024-11-20 10:34:15.953961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:12.576 [2024-11-20 10:34:15.953967] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:11:12.576 [2024-11-20 10:34:15.953975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:12.576 [2024-11-20 10:34:15.953982] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:12.576 [2024-11-20 10:34:15.953990] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:12.576 10:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.576 10:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:12.576 10:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.576 10:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.576 10:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:12.576 10:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.576 10:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.576 10:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.576 10:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.576 10:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.576 10:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.576 10:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.576 10:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.576 10:34:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.576 10:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.576 10:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.576 10:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.576 "name": "Existed_Raid", 00:11:12.576 "uuid": "558f14af-5e2c-4f43-8ddb-cb91e6fccf0e", 00:11:12.576 "strip_size_kb": 64, 00:11:12.576 "state": "configuring", 00:11:12.576 "raid_level": "raid0", 00:11:12.576 "superblock": true, 00:11:12.576 "num_base_bdevs": 4, 00:11:12.576 "num_base_bdevs_discovered": 0, 00:11:12.576 "num_base_bdevs_operational": 4, 00:11:12.576 "base_bdevs_list": [ 00:11:12.576 { 00:11:12.576 "name": "BaseBdev1", 00:11:12.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.576 "is_configured": false, 00:11:12.576 "data_offset": 0, 00:11:12.576 "data_size": 0 00:11:12.576 }, 00:11:12.576 { 00:11:12.576 "name": "BaseBdev2", 00:11:12.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.576 "is_configured": false, 00:11:12.576 "data_offset": 0, 00:11:12.576 "data_size": 0 00:11:12.576 }, 00:11:12.576 { 00:11:12.576 "name": "BaseBdev3", 00:11:12.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.576 "is_configured": false, 00:11:12.576 "data_offset": 0, 00:11:12.576 "data_size": 0 00:11:12.576 }, 00:11:12.576 { 00:11:12.576 "name": "BaseBdev4", 00:11:12.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.577 "is_configured": false, 00:11:12.577 "data_offset": 0, 00:11:12.577 "data_size": 0 00:11:12.577 } 00:11:12.577 ] 00:11:12.577 }' 00:11:12.577 10:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.577 10:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.144 10:34:16 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:13.144 10:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.144 10:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.144 [2024-11-20 10:34:16.417026] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:13.144 [2024-11-20 10:34:16.417144] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:13.144 10:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.144 10:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:13.144 10:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.145 10:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.145 [2024-11-20 10:34:16.429009] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:13.145 [2024-11-20 10:34:16.429104] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:13.145 [2024-11-20 10:34:16.429164] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:13.145 [2024-11-20 10:34:16.429207] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:13.145 [2024-11-20 10:34:16.429242] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:13.145 [2024-11-20 10:34:16.429286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:13.145 [2024-11-20 10:34:16.429319] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:11:13.145 [2024-11-20 10:34:16.429375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:13.145 10:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.145 10:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:13.145 10:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.145 10:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.145 [2024-11-20 10:34:16.476832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:13.145 BaseBdev1 00:11:13.145 10:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.145 10:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:13.145 10:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:13.145 10:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:13.145 10:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:13.145 10:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:13.145 10:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:13.145 10:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:13.145 10:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.145 10:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.145 10:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:13.145 10:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:13.145 10:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.145 10:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.145 [ 00:11:13.145 { 00:11:13.145 "name": "BaseBdev1", 00:11:13.145 "aliases": [ 00:11:13.145 "4bdf2799-fe98-4c7a-a0b5-91549b9b1092" 00:11:13.145 ], 00:11:13.145 "product_name": "Malloc disk", 00:11:13.145 "block_size": 512, 00:11:13.145 "num_blocks": 65536, 00:11:13.145 "uuid": "4bdf2799-fe98-4c7a-a0b5-91549b9b1092", 00:11:13.145 "assigned_rate_limits": { 00:11:13.145 "rw_ios_per_sec": 0, 00:11:13.145 "rw_mbytes_per_sec": 0, 00:11:13.145 "r_mbytes_per_sec": 0, 00:11:13.145 "w_mbytes_per_sec": 0 00:11:13.145 }, 00:11:13.145 "claimed": true, 00:11:13.145 "claim_type": "exclusive_write", 00:11:13.145 "zoned": false, 00:11:13.145 "supported_io_types": { 00:11:13.145 "read": true, 00:11:13.145 "write": true, 00:11:13.145 "unmap": true, 00:11:13.145 "flush": true, 00:11:13.145 "reset": true, 00:11:13.145 "nvme_admin": false, 00:11:13.145 "nvme_io": false, 00:11:13.145 "nvme_io_md": false, 00:11:13.145 "write_zeroes": true, 00:11:13.145 "zcopy": true, 00:11:13.145 "get_zone_info": false, 00:11:13.145 "zone_management": false, 00:11:13.145 "zone_append": false, 00:11:13.145 "compare": false, 00:11:13.145 "compare_and_write": false, 00:11:13.145 "abort": true, 00:11:13.145 "seek_hole": false, 00:11:13.145 "seek_data": false, 00:11:13.145 "copy": true, 00:11:13.145 "nvme_iov_md": false 00:11:13.145 }, 00:11:13.145 "memory_domains": [ 00:11:13.145 { 00:11:13.145 "dma_device_id": "system", 00:11:13.145 "dma_device_type": 1 00:11:13.145 }, 00:11:13.145 { 00:11:13.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.145 "dma_device_type": 2 00:11:13.145 } 00:11:13.145 ], 00:11:13.145 "driver_specific": {} 
00:11:13.145 } 00:11:13.145 ] 00:11:13.145 10:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.145 10:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:13.145 10:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:13.145 10:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.145 10:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.145 10:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:13.145 10:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.145 10:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.145 10:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.145 10:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.145 10:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.145 10:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.145 10:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.145 10:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.145 10:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.145 10:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.145 10:34:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.145 10:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.145 "name": "Existed_Raid", 00:11:13.145 "uuid": "47364c03-e809-4bab-a40c-1f40fd399ea0", 00:11:13.145 "strip_size_kb": 64, 00:11:13.145 "state": "configuring", 00:11:13.145 "raid_level": "raid0", 00:11:13.145 "superblock": true, 00:11:13.145 "num_base_bdevs": 4, 00:11:13.145 "num_base_bdevs_discovered": 1, 00:11:13.145 "num_base_bdevs_operational": 4, 00:11:13.145 "base_bdevs_list": [ 00:11:13.145 { 00:11:13.145 "name": "BaseBdev1", 00:11:13.145 "uuid": "4bdf2799-fe98-4c7a-a0b5-91549b9b1092", 00:11:13.145 "is_configured": true, 00:11:13.145 "data_offset": 2048, 00:11:13.145 "data_size": 63488 00:11:13.145 }, 00:11:13.145 { 00:11:13.145 "name": "BaseBdev2", 00:11:13.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.145 "is_configured": false, 00:11:13.145 "data_offset": 0, 00:11:13.145 "data_size": 0 00:11:13.145 }, 00:11:13.145 { 00:11:13.145 "name": "BaseBdev3", 00:11:13.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.145 "is_configured": false, 00:11:13.145 "data_offset": 0, 00:11:13.145 "data_size": 0 00:11:13.145 }, 00:11:13.145 { 00:11:13.145 "name": "BaseBdev4", 00:11:13.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.145 "is_configured": false, 00:11:13.146 "data_offset": 0, 00:11:13.146 "data_size": 0 00:11:13.146 } 00:11:13.146 ] 00:11:13.146 }' 00:11:13.146 10:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.146 10:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.714 10:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:13.714 10:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.714 10:34:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:13.714 [2024-11-20 10:34:16.992044] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:13.714 [2024-11-20 10:34:16.992162] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:13.714 10:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.714 10:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:13.714 10:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.714 10:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.714 [2024-11-20 10:34:17.004083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:13.714 [2024-11-20 10:34:17.006239] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:13.714 [2024-11-20 10:34:17.006336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:13.714 [2024-11-20 10:34:17.006416] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:13.714 [2024-11-20 10:34:17.006482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:13.714 [2024-11-20 10:34:17.006496] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:13.714 [2024-11-20 10:34:17.006507] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:13.714 10:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.714 10:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:13.714 10:34:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:13.714 10:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:13.714 10:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.714 10:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.714 10:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:13.714 10:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.714 10:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.714 10:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.714 10:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.714 10:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.714 10:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.714 10:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.714 10:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.714 10:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.714 10:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.714 10:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.714 10:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.714 "name": 
"Existed_Raid", 00:11:13.714 "uuid": "be39488a-dd0f-4bde-a1d4-44319886fd51", 00:11:13.714 "strip_size_kb": 64, 00:11:13.714 "state": "configuring", 00:11:13.714 "raid_level": "raid0", 00:11:13.714 "superblock": true, 00:11:13.714 "num_base_bdevs": 4, 00:11:13.714 "num_base_bdevs_discovered": 1, 00:11:13.714 "num_base_bdevs_operational": 4, 00:11:13.714 "base_bdevs_list": [ 00:11:13.714 { 00:11:13.714 "name": "BaseBdev1", 00:11:13.714 "uuid": "4bdf2799-fe98-4c7a-a0b5-91549b9b1092", 00:11:13.714 "is_configured": true, 00:11:13.714 "data_offset": 2048, 00:11:13.714 "data_size": 63488 00:11:13.714 }, 00:11:13.714 { 00:11:13.714 "name": "BaseBdev2", 00:11:13.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.714 "is_configured": false, 00:11:13.714 "data_offset": 0, 00:11:13.714 "data_size": 0 00:11:13.714 }, 00:11:13.714 { 00:11:13.714 "name": "BaseBdev3", 00:11:13.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.714 "is_configured": false, 00:11:13.714 "data_offset": 0, 00:11:13.714 "data_size": 0 00:11:13.714 }, 00:11:13.714 { 00:11:13.714 "name": "BaseBdev4", 00:11:13.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.714 "is_configured": false, 00:11:13.714 "data_offset": 0, 00:11:13.714 "data_size": 0 00:11:13.714 } 00:11:13.714 ] 00:11:13.714 }' 00:11:13.714 10:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.714 10:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.282 10:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:14.282 10:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.282 10:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.282 [2024-11-20 10:34:17.520198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:11:14.282 BaseBdev2 00:11:14.282 10:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.282 10:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:14.282 10:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:14.282 10:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:14.282 10:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:14.282 10:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:14.282 10:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:14.282 10:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:14.282 10:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.282 10:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.282 10:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.282 10:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:14.282 10:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.282 10:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.282 [ 00:11:14.282 { 00:11:14.282 "name": "BaseBdev2", 00:11:14.282 "aliases": [ 00:11:14.282 "d96b4b30-9fd6-4637-8d79-3a9d6d466dee" 00:11:14.282 ], 00:11:14.282 "product_name": "Malloc disk", 00:11:14.282 "block_size": 512, 00:11:14.282 "num_blocks": 65536, 00:11:14.282 "uuid": "d96b4b30-9fd6-4637-8d79-3a9d6d466dee", 00:11:14.282 
"assigned_rate_limits": { 00:11:14.282 "rw_ios_per_sec": 0, 00:11:14.282 "rw_mbytes_per_sec": 0, 00:11:14.282 "r_mbytes_per_sec": 0, 00:11:14.282 "w_mbytes_per_sec": 0 00:11:14.282 }, 00:11:14.282 "claimed": true, 00:11:14.282 "claim_type": "exclusive_write", 00:11:14.282 "zoned": false, 00:11:14.282 "supported_io_types": { 00:11:14.282 "read": true, 00:11:14.282 "write": true, 00:11:14.282 "unmap": true, 00:11:14.282 "flush": true, 00:11:14.282 "reset": true, 00:11:14.282 "nvme_admin": false, 00:11:14.282 "nvme_io": false, 00:11:14.282 "nvme_io_md": false, 00:11:14.282 "write_zeroes": true, 00:11:14.282 "zcopy": true, 00:11:14.282 "get_zone_info": false, 00:11:14.282 "zone_management": false, 00:11:14.282 "zone_append": false, 00:11:14.282 "compare": false, 00:11:14.282 "compare_and_write": false, 00:11:14.282 "abort": true, 00:11:14.282 "seek_hole": false, 00:11:14.282 "seek_data": false, 00:11:14.282 "copy": true, 00:11:14.282 "nvme_iov_md": false 00:11:14.282 }, 00:11:14.282 "memory_domains": [ 00:11:14.282 { 00:11:14.282 "dma_device_id": "system", 00:11:14.282 "dma_device_type": 1 00:11:14.282 }, 00:11:14.282 { 00:11:14.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.282 "dma_device_type": 2 00:11:14.282 } 00:11:14.282 ], 00:11:14.282 "driver_specific": {} 00:11:14.282 } 00:11:14.282 ] 00:11:14.282 10:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.282 10:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:14.282 10:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:14.282 10:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:14.282 10:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:14.282 10:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:11:14.282 10:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.282 10:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:14.282 10:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.282 10:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.282 10:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.282 10:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.282 10:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.282 10:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.282 10:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.282 10:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.282 10:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.282 10:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.282 10:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.282 10:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.282 "name": "Existed_Raid", 00:11:14.282 "uuid": "be39488a-dd0f-4bde-a1d4-44319886fd51", 00:11:14.282 "strip_size_kb": 64, 00:11:14.282 "state": "configuring", 00:11:14.282 "raid_level": "raid0", 00:11:14.282 "superblock": true, 00:11:14.282 "num_base_bdevs": 4, 00:11:14.282 "num_base_bdevs_discovered": 2, 00:11:14.282 "num_base_bdevs_operational": 4, 
00:11:14.282 "base_bdevs_list": [ 00:11:14.282 { 00:11:14.282 "name": "BaseBdev1", 00:11:14.282 "uuid": "4bdf2799-fe98-4c7a-a0b5-91549b9b1092", 00:11:14.282 "is_configured": true, 00:11:14.282 "data_offset": 2048, 00:11:14.282 "data_size": 63488 00:11:14.282 }, 00:11:14.282 { 00:11:14.282 "name": "BaseBdev2", 00:11:14.282 "uuid": "d96b4b30-9fd6-4637-8d79-3a9d6d466dee", 00:11:14.282 "is_configured": true, 00:11:14.282 "data_offset": 2048, 00:11:14.282 "data_size": 63488 00:11:14.282 }, 00:11:14.282 { 00:11:14.282 "name": "BaseBdev3", 00:11:14.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.282 "is_configured": false, 00:11:14.282 "data_offset": 0, 00:11:14.282 "data_size": 0 00:11:14.282 }, 00:11:14.282 { 00:11:14.282 "name": "BaseBdev4", 00:11:14.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.282 "is_configured": false, 00:11:14.282 "data_offset": 0, 00:11:14.282 "data_size": 0 00:11:14.282 } 00:11:14.282 ] 00:11:14.282 }' 00:11:14.282 10:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.282 10:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.852 10:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:14.852 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.852 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.852 [2024-11-20 10:34:18.088397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:14.852 BaseBdev3 00:11:14.852 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.852 10:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:14.852 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:11:14.852 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:14.852 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:14.852 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:14.852 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:14.852 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:14.852 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.852 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.852 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.852 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:14.852 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.852 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.852 [ 00:11:14.852 { 00:11:14.852 "name": "BaseBdev3", 00:11:14.852 "aliases": [ 00:11:14.852 "44dbe840-dd3e-4d47-ad68-4e430db2bff6" 00:11:14.852 ], 00:11:14.852 "product_name": "Malloc disk", 00:11:14.852 "block_size": 512, 00:11:14.852 "num_blocks": 65536, 00:11:14.852 "uuid": "44dbe840-dd3e-4d47-ad68-4e430db2bff6", 00:11:14.852 "assigned_rate_limits": { 00:11:14.852 "rw_ios_per_sec": 0, 00:11:14.852 "rw_mbytes_per_sec": 0, 00:11:14.852 "r_mbytes_per_sec": 0, 00:11:14.852 "w_mbytes_per_sec": 0 00:11:14.852 }, 00:11:14.852 "claimed": true, 00:11:14.852 "claim_type": "exclusive_write", 00:11:14.852 "zoned": false, 00:11:14.852 "supported_io_types": { 00:11:14.852 "read": true, 00:11:14.852 
"write": true, 00:11:14.852 "unmap": true, 00:11:14.852 "flush": true, 00:11:14.852 "reset": true, 00:11:14.852 "nvme_admin": false, 00:11:14.852 "nvme_io": false, 00:11:14.852 "nvme_io_md": false, 00:11:14.852 "write_zeroes": true, 00:11:14.852 "zcopy": true, 00:11:14.852 "get_zone_info": false, 00:11:14.852 "zone_management": false, 00:11:14.852 "zone_append": false, 00:11:14.852 "compare": false, 00:11:14.852 "compare_and_write": false, 00:11:14.852 "abort": true, 00:11:14.852 "seek_hole": false, 00:11:14.852 "seek_data": false, 00:11:14.852 "copy": true, 00:11:14.852 "nvme_iov_md": false 00:11:14.852 }, 00:11:14.852 "memory_domains": [ 00:11:14.852 { 00:11:14.852 "dma_device_id": "system", 00:11:14.852 "dma_device_type": 1 00:11:14.852 }, 00:11:14.852 { 00:11:14.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.852 "dma_device_type": 2 00:11:14.852 } 00:11:14.852 ], 00:11:14.852 "driver_specific": {} 00:11:14.852 } 00:11:14.852 ] 00:11:14.852 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.852 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:14.852 10:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:14.852 10:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:14.852 10:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:14.852 10:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.852 10:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.852 10:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:14.852 10:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:14.852 10:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.852 10:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.852 10:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.852 10:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.852 10:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.852 10:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.852 10:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.852 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.852 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.852 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.852 10:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.852 "name": "Existed_Raid", 00:11:14.852 "uuid": "be39488a-dd0f-4bde-a1d4-44319886fd51", 00:11:14.852 "strip_size_kb": 64, 00:11:14.852 "state": "configuring", 00:11:14.852 "raid_level": "raid0", 00:11:14.852 "superblock": true, 00:11:14.852 "num_base_bdevs": 4, 00:11:14.852 "num_base_bdevs_discovered": 3, 00:11:14.852 "num_base_bdevs_operational": 4, 00:11:14.852 "base_bdevs_list": [ 00:11:14.852 { 00:11:14.852 "name": "BaseBdev1", 00:11:14.852 "uuid": "4bdf2799-fe98-4c7a-a0b5-91549b9b1092", 00:11:14.852 "is_configured": true, 00:11:14.852 "data_offset": 2048, 00:11:14.852 "data_size": 63488 00:11:14.852 }, 00:11:14.852 { 00:11:14.852 "name": "BaseBdev2", 00:11:14.852 "uuid": 
"d96b4b30-9fd6-4637-8d79-3a9d6d466dee", 00:11:14.852 "is_configured": true, 00:11:14.852 "data_offset": 2048, 00:11:14.852 "data_size": 63488 00:11:14.852 }, 00:11:14.852 { 00:11:14.852 "name": "BaseBdev3", 00:11:14.852 "uuid": "44dbe840-dd3e-4d47-ad68-4e430db2bff6", 00:11:14.852 "is_configured": true, 00:11:14.852 "data_offset": 2048, 00:11:14.852 "data_size": 63488 00:11:14.852 }, 00:11:14.852 { 00:11:14.852 "name": "BaseBdev4", 00:11:14.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.852 "is_configured": false, 00:11:14.852 "data_offset": 0, 00:11:14.852 "data_size": 0 00:11:14.852 } 00:11:14.852 ] 00:11:14.852 }' 00:11:14.852 10:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.852 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.112 10:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:15.112 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.112 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.373 [2024-11-20 10:34:18.628603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:15.373 [2024-11-20 10:34:18.629006] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:15.373 [2024-11-20 10:34:18.629067] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:15.373 [2024-11-20 10:34:18.629430] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:15.373 [2024-11-20 10:34:18.629659] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:15.373 [2024-11-20 10:34:18.629716] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raBaseBdev4 00:11:15.373 
id_bdev 0x617000007e80 00:11:15.373 [2024-11-20 10:34:18.630001] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:15.373 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.373 10:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:15.373 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:15.373 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:15.373 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:15.373 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:15.373 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:15.373 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:15.373 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.373 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.373 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.373 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:15.373 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.373 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.373 [ 00:11:15.373 { 00:11:15.373 "name": "BaseBdev4", 00:11:15.373 "aliases": [ 00:11:15.373 "7453b49c-7d55-417c-a905-1b3cfadbb791" 00:11:15.373 ], 00:11:15.373 "product_name": "Malloc disk", 00:11:15.373 "block_size": 512, 00:11:15.373 
"num_blocks": 65536, 00:11:15.373 "uuid": "7453b49c-7d55-417c-a905-1b3cfadbb791", 00:11:15.373 "assigned_rate_limits": { 00:11:15.373 "rw_ios_per_sec": 0, 00:11:15.373 "rw_mbytes_per_sec": 0, 00:11:15.373 "r_mbytes_per_sec": 0, 00:11:15.373 "w_mbytes_per_sec": 0 00:11:15.373 }, 00:11:15.373 "claimed": true, 00:11:15.373 "claim_type": "exclusive_write", 00:11:15.373 "zoned": false, 00:11:15.373 "supported_io_types": { 00:11:15.373 "read": true, 00:11:15.373 "write": true, 00:11:15.373 "unmap": true, 00:11:15.373 "flush": true, 00:11:15.373 "reset": true, 00:11:15.373 "nvme_admin": false, 00:11:15.373 "nvme_io": false, 00:11:15.373 "nvme_io_md": false, 00:11:15.373 "write_zeroes": true, 00:11:15.373 "zcopy": true, 00:11:15.373 "get_zone_info": false, 00:11:15.373 "zone_management": false, 00:11:15.373 "zone_append": false, 00:11:15.373 "compare": false, 00:11:15.373 "compare_and_write": false, 00:11:15.373 "abort": true, 00:11:15.373 "seek_hole": false, 00:11:15.373 "seek_data": false, 00:11:15.373 "copy": true, 00:11:15.373 "nvme_iov_md": false 00:11:15.373 }, 00:11:15.373 "memory_domains": [ 00:11:15.373 { 00:11:15.373 "dma_device_id": "system", 00:11:15.373 "dma_device_type": 1 00:11:15.373 }, 00:11:15.373 { 00:11:15.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.373 "dma_device_type": 2 00:11:15.373 } 00:11:15.373 ], 00:11:15.373 "driver_specific": {} 00:11:15.373 } 00:11:15.373 ] 00:11:15.373 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.373 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:15.373 10:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:15.373 10:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:15.373 10:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:11:15.373 10:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.373 10:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:15.373 10:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:15.373 10:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.373 10:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.373 10:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.373 10:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.373 10:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.373 10:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.373 10:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.373 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.373 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.373 10:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.373 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.373 10:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.373 "name": "Existed_Raid", 00:11:15.373 "uuid": "be39488a-dd0f-4bde-a1d4-44319886fd51", 00:11:15.373 "strip_size_kb": 64, 00:11:15.373 "state": "online", 00:11:15.373 "raid_level": "raid0", 00:11:15.373 "superblock": true, 00:11:15.373 "num_base_bdevs": 4, 
00:11:15.373 "num_base_bdevs_discovered": 4, 00:11:15.373 "num_base_bdevs_operational": 4, 00:11:15.373 "base_bdevs_list": [ 00:11:15.373 { 00:11:15.373 "name": "BaseBdev1", 00:11:15.373 "uuid": "4bdf2799-fe98-4c7a-a0b5-91549b9b1092", 00:11:15.373 "is_configured": true, 00:11:15.373 "data_offset": 2048, 00:11:15.373 "data_size": 63488 00:11:15.373 }, 00:11:15.373 { 00:11:15.373 "name": "BaseBdev2", 00:11:15.373 "uuid": "d96b4b30-9fd6-4637-8d79-3a9d6d466dee", 00:11:15.373 "is_configured": true, 00:11:15.373 "data_offset": 2048, 00:11:15.373 "data_size": 63488 00:11:15.373 }, 00:11:15.373 { 00:11:15.373 "name": "BaseBdev3", 00:11:15.373 "uuid": "44dbe840-dd3e-4d47-ad68-4e430db2bff6", 00:11:15.373 "is_configured": true, 00:11:15.373 "data_offset": 2048, 00:11:15.373 "data_size": 63488 00:11:15.373 }, 00:11:15.373 { 00:11:15.373 "name": "BaseBdev4", 00:11:15.373 "uuid": "7453b49c-7d55-417c-a905-1b3cfadbb791", 00:11:15.373 "is_configured": true, 00:11:15.373 "data_offset": 2048, 00:11:15.373 "data_size": 63488 00:11:15.373 } 00:11:15.373 ] 00:11:15.373 }' 00:11:15.374 10:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.374 10:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.633 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:15.634 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:15.634 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:15.634 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:15.634 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:15.634 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:15.634 
10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:15.634 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:15.634 10:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.634 10:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.634 [2024-11-20 10:34:19.108323] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:15.893 10:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.893 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:15.893 "name": "Existed_Raid", 00:11:15.893 "aliases": [ 00:11:15.893 "be39488a-dd0f-4bde-a1d4-44319886fd51" 00:11:15.893 ], 00:11:15.893 "product_name": "Raid Volume", 00:11:15.893 "block_size": 512, 00:11:15.893 "num_blocks": 253952, 00:11:15.893 "uuid": "be39488a-dd0f-4bde-a1d4-44319886fd51", 00:11:15.893 "assigned_rate_limits": { 00:11:15.893 "rw_ios_per_sec": 0, 00:11:15.893 "rw_mbytes_per_sec": 0, 00:11:15.893 "r_mbytes_per_sec": 0, 00:11:15.893 "w_mbytes_per_sec": 0 00:11:15.893 }, 00:11:15.893 "claimed": false, 00:11:15.893 "zoned": false, 00:11:15.893 "supported_io_types": { 00:11:15.893 "read": true, 00:11:15.893 "write": true, 00:11:15.893 "unmap": true, 00:11:15.893 "flush": true, 00:11:15.893 "reset": true, 00:11:15.893 "nvme_admin": false, 00:11:15.893 "nvme_io": false, 00:11:15.893 "nvme_io_md": false, 00:11:15.893 "write_zeroes": true, 00:11:15.893 "zcopy": false, 00:11:15.893 "get_zone_info": false, 00:11:15.893 "zone_management": false, 00:11:15.893 "zone_append": false, 00:11:15.893 "compare": false, 00:11:15.893 "compare_and_write": false, 00:11:15.893 "abort": false, 00:11:15.893 "seek_hole": false, 00:11:15.893 "seek_data": false, 00:11:15.893 "copy": false, 00:11:15.893 
"nvme_iov_md": false 00:11:15.893 }, 00:11:15.893 "memory_domains": [ 00:11:15.893 { 00:11:15.893 "dma_device_id": "system", 00:11:15.893 "dma_device_type": 1 00:11:15.893 }, 00:11:15.893 { 00:11:15.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.893 "dma_device_type": 2 00:11:15.893 }, 00:11:15.893 { 00:11:15.893 "dma_device_id": "system", 00:11:15.893 "dma_device_type": 1 00:11:15.893 }, 00:11:15.893 { 00:11:15.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.893 "dma_device_type": 2 00:11:15.893 }, 00:11:15.893 { 00:11:15.893 "dma_device_id": "system", 00:11:15.893 "dma_device_type": 1 00:11:15.893 }, 00:11:15.893 { 00:11:15.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.893 "dma_device_type": 2 00:11:15.893 }, 00:11:15.893 { 00:11:15.893 "dma_device_id": "system", 00:11:15.893 "dma_device_type": 1 00:11:15.893 }, 00:11:15.893 { 00:11:15.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.893 "dma_device_type": 2 00:11:15.893 } 00:11:15.893 ], 00:11:15.893 "driver_specific": { 00:11:15.893 "raid": { 00:11:15.893 "uuid": "be39488a-dd0f-4bde-a1d4-44319886fd51", 00:11:15.893 "strip_size_kb": 64, 00:11:15.893 "state": "online", 00:11:15.893 "raid_level": "raid0", 00:11:15.893 "superblock": true, 00:11:15.893 "num_base_bdevs": 4, 00:11:15.893 "num_base_bdevs_discovered": 4, 00:11:15.893 "num_base_bdevs_operational": 4, 00:11:15.893 "base_bdevs_list": [ 00:11:15.893 { 00:11:15.893 "name": "BaseBdev1", 00:11:15.893 "uuid": "4bdf2799-fe98-4c7a-a0b5-91549b9b1092", 00:11:15.893 "is_configured": true, 00:11:15.893 "data_offset": 2048, 00:11:15.894 "data_size": 63488 00:11:15.894 }, 00:11:15.894 { 00:11:15.894 "name": "BaseBdev2", 00:11:15.894 "uuid": "d96b4b30-9fd6-4637-8d79-3a9d6d466dee", 00:11:15.894 "is_configured": true, 00:11:15.894 "data_offset": 2048, 00:11:15.894 "data_size": 63488 00:11:15.894 }, 00:11:15.894 { 00:11:15.894 "name": "BaseBdev3", 00:11:15.894 "uuid": "44dbe840-dd3e-4d47-ad68-4e430db2bff6", 00:11:15.894 "is_configured": true, 
00:11:15.894 "data_offset": 2048, 00:11:15.894 "data_size": 63488 00:11:15.894 }, 00:11:15.894 { 00:11:15.894 "name": "BaseBdev4", 00:11:15.894 "uuid": "7453b49c-7d55-417c-a905-1b3cfadbb791", 00:11:15.894 "is_configured": true, 00:11:15.894 "data_offset": 2048, 00:11:15.894 "data_size": 63488 00:11:15.894 } 00:11:15.894 ] 00:11:15.894 } 00:11:15.894 } 00:11:15.894 }' 00:11:15.894 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:15.894 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:15.894 BaseBdev2 00:11:15.894 BaseBdev3 00:11:15.894 BaseBdev4' 00:11:15.894 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.894 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:15.894 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.894 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:15.894 10:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.894 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.894 10:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.894 10:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.894 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.894 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.894 10:34:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.894 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:15.894 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.894 10:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.894 10:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.894 10:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.894 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.894 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.894 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.894 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:15.894 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.894 10:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.894 10:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.154 10:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.154 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.154 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.154 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:16.154 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:16.154 10:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.154 10:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.154 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.154 10:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.154 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.154 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.154 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:16.154 10:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.154 10:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.154 [2024-11-20 10:34:19.463471] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:16.154 [2024-11-20 10:34:19.463504] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:16.154 [2024-11-20 10:34:19.463559] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:16.154 10:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.154 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:16.154 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:16.154 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:16.154 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:16.154 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:16.154 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:11:16.154 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.154 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:16.154 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:16.154 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.154 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:16.154 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.154 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.154 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.154 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.154 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.154 10:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.154 10:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.154 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.154 10:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:16.154 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.154 "name": "Existed_Raid", 00:11:16.154 "uuid": "be39488a-dd0f-4bde-a1d4-44319886fd51", 00:11:16.154 "strip_size_kb": 64, 00:11:16.154 "state": "offline", 00:11:16.154 "raid_level": "raid0", 00:11:16.154 "superblock": true, 00:11:16.154 "num_base_bdevs": 4, 00:11:16.154 "num_base_bdevs_discovered": 3, 00:11:16.154 "num_base_bdevs_operational": 3, 00:11:16.154 "base_bdevs_list": [ 00:11:16.154 { 00:11:16.154 "name": null, 00:11:16.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.154 "is_configured": false, 00:11:16.154 "data_offset": 0, 00:11:16.154 "data_size": 63488 00:11:16.154 }, 00:11:16.154 { 00:11:16.154 "name": "BaseBdev2", 00:11:16.154 "uuid": "d96b4b30-9fd6-4637-8d79-3a9d6d466dee", 00:11:16.154 "is_configured": true, 00:11:16.154 "data_offset": 2048, 00:11:16.154 "data_size": 63488 00:11:16.154 }, 00:11:16.154 { 00:11:16.154 "name": "BaseBdev3", 00:11:16.154 "uuid": "44dbe840-dd3e-4d47-ad68-4e430db2bff6", 00:11:16.154 "is_configured": true, 00:11:16.154 "data_offset": 2048, 00:11:16.154 "data_size": 63488 00:11:16.154 }, 00:11:16.154 { 00:11:16.154 "name": "BaseBdev4", 00:11:16.154 "uuid": "7453b49c-7d55-417c-a905-1b3cfadbb791", 00:11:16.154 "is_configured": true, 00:11:16.154 "data_offset": 2048, 00:11:16.155 "data_size": 63488 00:11:16.155 } 00:11:16.155 ] 00:11:16.155 }' 00:11:16.155 10:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.155 10:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.724 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:16.724 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:16.724 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.724 
10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:16.724 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.724 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.724 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.724 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:16.724 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:16.724 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:16.725 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.725 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.725 [2024-11-20 10:34:20.067750] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:16.725 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.725 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:16.725 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:16.725 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.725 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:16.725 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.725 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.984 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:16.984 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:16.984 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:16.984 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:16.984 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.984 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.984 [2024-11-20 10:34:20.245514] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:16.984 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.984 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:16.984 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:16.984 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.984 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:16.984 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.984 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.984 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.984 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:16.984 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:16.985 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:16.985 10:34:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.985 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.985 [2024-11-20 10:34:20.417317] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:16.985 [2024-11-20 10:34:20.417504] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.245 BaseBdev2 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.245 [ 00:11:17.245 { 00:11:17.245 "name": "BaseBdev2", 00:11:17.245 "aliases": [ 00:11:17.245 
"c9c8be5b-6970-4481-8b0b-cd03bd3832cf" 00:11:17.245 ], 00:11:17.245 "product_name": "Malloc disk", 00:11:17.245 "block_size": 512, 00:11:17.245 "num_blocks": 65536, 00:11:17.245 "uuid": "c9c8be5b-6970-4481-8b0b-cd03bd3832cf", 00:11:17.245 "assigned_rate_limits": { 00:11:17.245 "rw_ios_per_sec": 0, 00:11:17.245 "rw_mbytes_per_sec": 0, 00:11:17.245 "r_mbytes_per_sec": 0, 00:11:17.245 "w_mbytes_per_sec": 0 00:11:17.245 }, 00:11:17.245 "claimed": false, 00:11:17.245 "zoned": false, 00:11:17.245 "supported_io_types": { 00:11:17.245 "read": true, 00:11:17.245 "write": true, 00:11:17.245 "unmap": true, 00:11:17.245 "flush": true, 00:11:17.245 "reset": true, 00:11:17.245 "nvme_admin": false, 00:11:17.245 "nvme_io": false, 00:11:17.245 "nvme_io_md": false, 00:11:17.245 "write_zeroes": true, 00:11:17.245 "zcopy": true, 00:11:17.245 "get_zone_info": false, 00:11:17.245 "zone_management": false, 00:11:17.245 "zone_append": false, 00:11:17.245 "compare": false, 00:11:17.245 "compare_and_write": false, 00:11:17.245 "abort": true, 00:11:17.245 "seek_hole": false, 00:11:17.245 "seek_data": false, 00:11:17.245 "copy": true, 00:11:17.245 "nvme_iov_md": false 00:11:17.245 }, 00:11:17.245 "memory_domains": [ 00:11:17.245 { 00:11:17.245 "dma_device_id": "system", 00:11:17.245 "dma_device_type": 1 00:11:17.245 }, 00:11:17.245 { 00:11:17.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.245 "dma_device_type": 2 00:11:17.245 } 00:11:17.245 ], 00:11:17.245 "driver_specific": {} 00:11:17.245 } 00:11:17.245 ] 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:17.245 10:34:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.245 BaseBdev3 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.245 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.506 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.506 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:17.506 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.506 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.506 [ 00:11:17.506 { 
00:11:17.506 "name": "BaseBdev3", 00:11:17.506 "aliases": [ 00:11:17.506 "f2787794-baa8-40b9-8157-ec2670bf5dc4" 00:11:17.506 ], 00:11:17.506 "product_name": "Malloc disk", 00:11:17.506 "block_size": 512, 00:11:17.506 "num_blocks": 65536, 00:11:17.506 "uuid": "f2787794-baa8-40b9-8157-ec2670bf5dc4", 00:11:17.506 "assigned_rate_limits": { 00:11:17.506 "rw_ios_per_sec": 0, 00:11:17.506 "rw_mbytes_per_sec": 0, 00:11:17.506 "r_mbytes_per_sec": 0, 00:11:17.506 "w_mbytes_per_sec": 0 00:11:17.506 }, 00:11:17.506 "claimed": false, 00:11:17.506 "zoned": false, 00:11:17.506 "supported_io_types": { 00:11:17.506 "read": true, 00:11:17.506 "write": true, 00:11:17.506 "unmap": true, 00:11:17.506 "flush": true, 00:11:17.506 "reset": true, 00:11:17.506 "nvme_admin": false, 00:11:17.506 "nvme_io": false, 00:11:17.506 "nvme_io_md": false, 00:11:17.506 "write_zeroes": true, 00:11:17.506 "zcopy": true, 00:11:17.506 "get_zone_info": false, 00:11:17.506 "zone_management": false, 00:11:17.506 "zone_append": false, 00:11:17.506 "compare": false, 00:11:17.506 "compare_and_write": false, 00:11:17.506 "abort": true, 00:11:17.506 "seek_hole": false, 00:11:17.506 "seek_data": false, 00:11:17.506 "copy": true, 00:11:17.506 "nvme_iov_md": false 00:11:17.506 }, 00:11:17.506 "memory_domains": [ 00:11:17.506 { 00:11:17.506 "dma_device_id": "system", 00:11:17.506 "dma_device_type": 1 00:11:17.506 }, 00:11:17.506 { 00:11:17.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.506 "dma_device_type": 2 00:11:17.506 } 00:11:17.506 ], 00:11:17.506 "driver_specific": {} 00:11:17.506 } 00:11:17.506 ] 00:11:17.506 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.506 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:17.506 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:17.506 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:17.506 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:17.506 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.506 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.506 BaseBdev4 00:11:17.506 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.506 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:17.506 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:17.506 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:17.506 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:17.506 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:17.506 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:17.506 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:17.506 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.506 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.506 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.506 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:17.506 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.506 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:17.506 [ 00:11:17.506 { 00:11:17.506 "name": "BaseBdev4", 00:11:17.506 "aliases": [ 00:11:17.506 "20df6069-8464-4ef4-9696-fabc9d260861" 00:11:17.506 ], 00:11:17.506 "product_name": "Malloc disk", 00:11:17.506 "block_size": 512, 00:11:17.506 "num_blocks": 65536, 00:11:17.506 "uuid": "20df6069-8464-4ef4-9696-fabc9d260861", 00:11:17.506 "assigned_rate_limits": { 00:11:17.506 "rw_ios_per_sec": 0, 00:11:17.506 "rw_mbytes_per_sec": 0, 00:11:17.506 "r_mbytes_per_sec": 0, 00:11:17.506 "w_mbytes_per_sec": 0 00:11:17.506 }, 00:11:17.506 "claimed": false, 00:11:17.506 "zoned": false, 00:11:17.506 "supported_io_types": { 00:11:17.506 "read": true, 00:11:17.506 "write": true, 00:11:17.506 "unmap": true, 00:11:17.506 "flush": true, 00:11:17.506 "reset": true, 00:11:17.506 "nvme_admin": false, 00:11:17.506 "nvme_io": false, 00:11:17.506 "nvme_io_md": false, 00:11:17.506 "write_zeroes": true, 00:11:17.506 "zcopy": true, 00:11:17.506 "get_zone_info": false, 00:11:17.506 "zone_management": false, 00:11:17.506 "zone_append": false, 00:11:17.506 "compare": false, 00:11:17.506 "compare_and_write": false, 00:11:17.506 "abort": true, 00:11:17.506 "seek_hole": false, 00:11:17.506 "seek_data": false, 00:11:17.506 "copy": true, 00:11:17.506 "nvme_iov_md": false 00:11:17.506 }, 00:11:17.506 "memory_domains": [ 00:11:17.506 { 00:11:17.506 "dma_device_id": "system", 00:11:17.506 "dma_device_type": 1 00:11:17.506 }, 00:11:17.506 { 00:11:17.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.506 "dma_device_type": 2 00:11:17.506 } 00:11:17.506 ], 00:11:17.506 "driver_specific": {} 00:11:17.506 } 00:11:17.506 ] 00:11:17.506 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.506 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:17.506 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:17.506 10:34:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:17.506 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:17.506 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.506 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.506 [2024-11-20 10:34:20.845247] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:17.506 [2024-11-20 10:34:20.845377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:17.507 [2024-11-20 10:34:20.845448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:17.507 [2024-11-20 10:34:20.847599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:17.507 [2024-11-20 10:34:20.847709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:17.507 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.507 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:17.507 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.507 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.507 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:17.507 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.507 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:17.507 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.507 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.507 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.507 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.507 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.507 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.507 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.507 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.507 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.507 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.507 "name": "Existed_Raid", 00:11:17.507 "uuid": "20902c10-70cf-4008-8259-8075917b0492", 00:11:17.507 "strip_size_kb": 64, 00:11:17.507 "state": "configuring", 00:11:17.507 "raid_level": "raid0", 00:11:17.507 "superblock": true, 00:11:17.507 "num_base_bdevs": 4, 00:11:17.507 "num_base_bdevs_discovered": 3, 00:11:17.507 "num_base_bdevs_operational": 4, 00:11:17.507 "base_bdevs_list": [ 00:11:17.507 { 00:11:17.507 "name": "BaseBdev1", 00:11:17.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.507 "is_configured": false, 00:11:17.507 "data_offset": 0, 00:11:17.507 "data_size": 0 00:11:17.507 }, 00:11:17.507 { 00:11:17.507 "name": "BaseBdev2", 00:11:17.507 "uuid": "c9c8be5b-6970-4481-8b0b-cd03bd3832cf", 00:11:17.507 "is_configured": true, 00:11:17.507 "data_offset": 2048, 00:11:17.507 "data_size": 63488 
00:11:17.507 }, 00:11:17.507 { 00:11:17.507 "name": "BaseBdev3", 00:11:17.507 "uuid": "f2787794-baa8-40b9-8157-ec2670bf5dc4", 00:11:17.507 "is_configured": true, 00:11:17.507 "data_offset": 2048, 00:11:17.507 "data_size": 63488 00:11:17.507 }, 00:11:17.507 { 00:11:17.507 "name": "BaseBdev4", 00:11:17.507 "uuid": "20df6069-8464-4ef4-9696-fabc9d260861", 00:11:17.507 "is_configured": true, 00:11:17.507 "data_offset": 2048, 00:11:17.507 "data_size": 63488 00:11:17.507 } 00:11:17.507 ] 00:11:17.507 }' 00:11:17.507 10:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.507 10:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.079 10:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:18.080 10:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.080 10:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.080 [2024-11-20 10:34:21.256584] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:18.080 10:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.080 10:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:18.080 10:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.080 10:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.080 10:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:18.080 10:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.080 10:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:18.080 10:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.080 10:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.080 10:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.080 10:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.080 10:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.080 10:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.080 10:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.080 10:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.080 10:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.080 10:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.080 "name": "Existed_Raid", 00:11:18.080 "uuid": "20902c10-70cf-4008-8259-8075917b0492", 00:11:18.080 "strip_size_kb": 64, 00:11:18.080 "state": "configuring", 00:11:18.080 "raid_level": "raid0", 00:11:18.080 "superblock": true, 00:11:18.080 "num_base_bdevs": 4, 00:11:18.080 "num_base_bdevs_discovered": 2, 00:11:18.080 "num_base_bdevs_operational": 4, 00:11:18.080 "base_bdevs_list": [ 00:11:18.080 { 00:11:18.080 "name": "BaseBdev1", 00:11:18.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.080 "is_configured": false, 00:11:18.080 "data_offset": 0, 00:11:18.080 "data_size": 0 00:11:18.080 }, 00:11:18.080 { 00:11:18.080 "name": null, 00:11:18.080 "uuid": "c9c8be5b-6970-4481-8b0b-cd03bd3832cf", 00:11:18.080 "is_configured": false, 00:11:18.080 "data_offset": 0, 00:11:18.080 "data_size": 63488 
00:11:18.080 }, 00:11:18.080 { 00:11:18.080 "name": "BaseBdev3", 00:11:18.080 "uuid": "f2787794-baa8-40b9-8157-ec2670bf5dc4", 00:11:18.080 "is_configured": true, 00:11:18.080 "data_offset": 2048, 00:11:18.080 "data_size": 63488 00:11:18.080 }, 00:11:18.080 { 00:11:18.080 "name": "BaseBdev4", 00:11:18.080 "uuid": "20df6069-8464-4ef4-9696-fabc9d260861", 00:11:18.080 "is_configured": true, 00:11:18.080 "data_offset": 2048, 00:11:18.080 "data_size": 63488 00:11:18.080 } 00:11:18.080 ] 00:11:18.080 }' 00:11:18.080 10:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.080 10:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.340 10:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.340 10:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:18.340 10:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.340 10:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.340 10:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.340 10:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:18.340 10:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:18.340 10:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.340 10:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.340 [2024-11-20 10:34:21.785663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:18.340 BaseBdev1 00:11:18.340 10:34:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.340 10:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:18.340 10:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:18.340 10:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:18.340 10:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:18.340 10:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:18.340 10:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:18.340 10:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:18.340 10:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.340 10:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.340 10:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.340 10:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:18.340 10:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.340 10:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.340 [ 00:11:18.340 { 00:11:18.340 "name": "BaseBdev1", 00:11:18.340 "aliases": [ 00:11:18.340 "4fff2071-97a9-4fff-b9b2-b8991d7fd371" 00:11:18.340 ], 00:11:18.340 "product_name": "Malloc disk", 00:11:18.340 "block_size": 512, 00:11:18.340 "num_blocks": 65536, 00:11:18.340 "uuid": "4fff2071-97a9-4fff-b9b2-b8991d7fd371", 00:11:18.340 "assigned_rate_limits": { 00:11:18.601 "rw_ios_per_sec": 0, 00:11:18.601 "rw_mbytes_per_sec": 0, 
00:11:18.601 "r_mbytes_per_sec": 0, 00:11:18.601 "w_mbytes_per_sec": 0 00:11:18.601 }, 00:11:18.601 "claimed": true, 00:11:18.601 "claim_type": "exclusive_write", 00:11:18.601 "zoned": false, 00:11:18.601 "supported_io_types": { 00:11:18.601 "read": true, 00:11:18.601 "write": true, 00:11:18.601 "unmap": true, 00:11:18.601 "flush": true, 00:11:18.601 "reset": true, 00:11:18.601 "nvme_admin": false, 00:11:18.601 "nvme_io": false, 00:11:18.601 "nvme_io_md": false, 00:11:18.601 "write_zeroes": true, 00:11:18.601 "zcopy": true, 00:11:18.601 "get_zone_info": false, 00:11:18.601 "zone_management": false, 00:11:18.601 "zone_append": false, 00:11:18.601 "compare": false, 00:11:18.601 "compare_and_write": false, 00:11:18.601 "abort": true, 00:11:18.601 "seek_hole": false, 00:11:18.601 "seek_data": false, 00:11:18.601 "copy": true, 00:11:18.601 "nvme_iov_md": false 00:11:18.601 }, 00:11:18.601 "memory_domains": [ 00:11:18.601 { 00:11:18.601 "dma_device_id": "system", 00:11:18.601 "dma_device_type": 1 00:11:18.601 }, 00:11:18.601 { 00:11:18.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.601 "dma_device_type": 2 00:11:18.601 } 00:11:18.601 ], 00:11:18.601 "driver_specific": {} 00:11:18.601 } 00:11:18.601 ] 00:11:18.601 10:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.601 10:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:18.601 10:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:18.601 10:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.601 10:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.601 10:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:18.601 10:34:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.601 10:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.601 10:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.601 10:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.601 10:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.601 10:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.601 10:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.601 10:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.601 10:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.601 10:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.601 10:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.601 10:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.601 "name": "Existed_Raid", 00:11:18.601 "uuid": "20902c10-70cf-4008-8259-8075917b0492", 00:11:18.602 "strip_size_kb": 64, 00:11:18.602 "state": "configuring", 00:11:18.602 "raid_level": "raid0", 00:11:18.602 "superblock": true, 00:11:18.602 "num_base_bdevs": 4, 00:11:18.602 "num_base_bdevs_discovered": 3, 00:11:18.602 "num_base_bdevs_operational": 4, 00:11:18.602 "base_bdevs_list": [ 00:11:18.602 { 00:11:18.602 "name": "BaseBdev1", 00:11:18.602 "uuid": "4fff2071-97a9-4fff-b9b2-b8991d7fd371", 00:11:18.602 "is_configured": true, 00:11:18.602 "data_offset": 2048, 00:11:18.602 "data_size": 63488 00:11:18.602 }, 00:11:18.602 { 
00:11:18.602 "name": null, 00:11:18.602 "uuid": "c9c8be5b-6970-4481-8b0b-cd03bd3832cf", 00:11:18.602 "is_configured": false, 00:11:18.602 "data_offset": 0, 00:11:18.602 "data_size": 63488 00:11:18.602 }, 00:11:18.602 { 00:11:18.602 "name": "BaseBdev3", 00:11:18.602 "uuid": "f2787794-baa8-40b9-8157-ec2670bf5dc4", 00:11:18.602 "is_configured": true, 00:11:18.602 "data_offset": 2048, 00:11:18.602 "data_size": 63488 00:11:18.602 }, 00:11:18.602 { 00:11:18.602 "name": "BaseBdev4", 00:11:18.602 "uuid": "20df6069-8464-4ef4-9696-fabc9d260861", 00:11:18.602 "is_configured": true, 00:11:18.602 "data_offset": 2048, 00:11:18.602 "data_size": 63488 00:11:18.602 } 00:11:18.602 ] 00:11:18.602 }' 00:11:18.602 10:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.602 10:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.861 10:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.861 10:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.861 10:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.861 10:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:18.861 10:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.861 10:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:18.861 10:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:18.861 10:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.861 10:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.861 [2024-11-20 10:34:22.300898] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:18.861 10:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.861 10:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:18.861 10:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.861 10:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.861 10:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:18.861 10:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.861 10:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.861 10:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.861 10:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.861 10:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.861 10:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.861 10:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.861 10:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.861 10:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.861 10:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.861 10:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.121 10:34:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.121 "name": "Existed_Raid", 00:11:19.121 "uuid": "20902c10-70cf-4008-8259-8075917b0492", 00:11:19.121 "strip_size_kb": 64, 00:11:19.121 "state": "configuring", 00:11:19.121 "raid_level": "raid0", 00:11:19.121 "superblock": true, 00:11:19.121 "num_base_bdevs": 4, 00:11:19.121 "num_base_bdevs_discovered": 2, 00:11:19.121 "num_base_bdevs_operational": 4, 00:11:19.121 "base_bdevs_list": [ 00:11:19.121 { 00:11:19.121 "name": "BaseBdev1", 00:11:19.121 "uuid": "4fff2071-97a9-4fff-b9b2-b8991d7fd371", 00:11:19.121 "is_configured": true, 00:11:19.121 "data_offset": 2048, 00:11:19.121 "data_size": 63488 00:11:19.121 }, 00:11:19.121 { 00:11:19.121 "name": null, 00:11:19.121 "uuid": "c9c8be5b-6970-4481-8b0b-cd03bd3832cf", 00:11:19.121 "is_configured": false, 00:11:19.121 "data_offset": 0, 00:11:19.121 "data_size": 63488 00:11:19.121 }, 00:11:19.121 { 00:11:19.121 "name": null, 00:11:19.121 "uuid": "f2787794-baa8-40b9-8157-ec2670bf5dc4", 00:11:19.121 "is_configured": false, 00:11:19.121 "data_offset": 0, 00:11:19.121 "data_size": 63488 00:11:19.121 }, 00:11:19.121 { 00:11:19.121 "name": "BaseBdev4", 00:11:19.121 "uuid": "20df6069-8464-4ef4-9696-fabc9d260861", 00:11:19.121 "is_configured": true, 00:11:19.121 "data_offset": 2048, 00:11:19.121 "data_size": 63488 00:11:19.121 } 00:11:19.121 ] 00:11:19.121 }' 00:11:19.121 10:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.121 10:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.380 10:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.380 10:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:19.380 10:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.380 
10:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.380 10:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.380 10:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:19.380 10:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:19.380 10:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.380 10:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.380 [2024-11-20 10:34:22.804102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:19.380 10:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.380 10:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:19.380 10:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.380 10:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.380 10:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:19.380 10:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.380 10:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.380 10:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.380 10:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.380 10:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:19.380 10:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.380 10:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.380 10:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.380 10:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.380 10:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.380 10:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.640 10:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.640 "name": "Existed_Raid", 00:11:19.640 "uuid": "20902c10-70cf-4008-8259-8075917b0492", 00:11:19.640 "strip_size_kb": 64, 00:11:19.640 "state": "configuring", 00:11:19.640 "raid_level": "raid0", 00:11:19.640 "superblock": true, 00:11:19.640 "num_base_bdevs": 4, 00:11:19.640 "num_base_bdevs_discovered": 3, 00:11:19.640 "num_base_bdevs_operational": 4, 00:11:19.640 "base_bdevs_list": [ 00:11:19.640 { 00:11:19.640 "name": "BaseBdev1", 00:11:19.640 "uuid": "4fff2071-97a9-4fff-b9b2-b8991d7fd371", 00:11:19.640 "is_configured": true, 00:11:19.640 "data_offset": 2048, 00:11:19.640 "data_size": 63488 00:11:19.640 }, 00:11:19.640 { 00:11:19.640 "name": null, 00:11:19.640 "uuid": "c9c8be5b-6970-4481-8b0b-cd03bd3832cf", 00:11:19.640 "is_configured": false, 00:11:19.640 "data_offset": 0, 00:11:19.640 "data_size": 63488 00:11:19.640 }, 00:11:19.640 { 00:11:19.640 "name": "BaseBdev3", 00:11:19.640 "uuid": "f2787794-baa8-40b9-8157-ec2670bf5dc4", 00:11:19.640 "is_configured": true, 00:11:19.640 "data_offset": 2048, 00:11:19.640 "data_size": 63488 00:11:19.640 }, 00:11:19.640 { 00:11:19.640 "name": "BaseBdev4", 00:11:19.640 "uuid": 
"20df6069-8464-4ef4-9696-fabc9d260861", 00:11:19.640 "is_configured": true, 00:11:19.640 "data_offset": 2048, 00:11:19.640 "data_size": 63488 00:11:19.640 } 00:11:19.640 ] 00:11:19.640 }' 00:11:19.640 10:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.640 10:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.899 10:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.899 10:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.899 10:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.899 10:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:19.899 10:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.899 10:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:19.899 10:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:19.899 10:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.899 10:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.899 [2024-11-20 10:34:23.339292] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:20.159 10:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.159 10:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:20.160 10:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.160 10:34:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.160 10:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:20.160 10:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.160 10:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.160 10:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.160 10:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.160 10:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.160 10:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.160 10:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.160 10:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.160 10:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.160 10:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.160 10:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.160 10:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.160 "name": "Existed_Raid", 00:11:20.160 "uuid": "20902c10-70cf-4008-8259-8075917b0492", 00:11:20.160 "strip_size_kb": 64, 00:11:20.160 "state": "configuring", 00:11:20.160 "raid_level": "raid0", 00:11:20.160 "superblock": true, 00:11:20.160 "num_base_bdevs": 4, 00:11:20.160 "num_base_bdevs_discovered": 2, 00:11:20.160 "num_base_bdevs_operational": 4, 00:11:20.160 "base_bdevs_list": [ 00:11:20.160 { 00:11:20.160 "name": null, 00:11:20.160 
"uuid": "4fff2071-97a9-4fff-b9b2-b8991d7fd371", 00:11:20.160 "is_configured": false, 00:11:20.160 "data_offset": 0, 00:11:20.160 "data_size": 63488 00:11:20.160 }, 00:11:20.160 { 00:11:20.160 "name": null, 00:11:20.160 "uuid": "c9c8be5b-6970-4481-8b0b-cd03bd3832cf", 00:11:20.160 "is_configured": false, 00:11:20.160 "data_offset": 0, 00:11:20.160 "data_size": 63488 00:11:20.160 }, 00:11:20.160 { 00:11:20.160 "name": "BaseBdev3", 00:11:20.160 "uuid": "f2787794-baa8-40b9-8157-ec2670bf5dc4", 00:11:20.160 "is_configured": true, 00:11:20.160 "data_offset": 2048, 00:11:20.160 "data_size": 63488 00:11:20.160 }, 00:11:20.160 { 00:11:20.160 "name": "BaseBdev4", 00:11:20.160 "uuid": "20df6069-8464-4ef4-9696-fabc9d260861", 00:11:20.160 "is_configured": true, 00:11:20.160 "data_offset": 2048, 00:11:20.160 "data_size": 63488 00:11:20.160 } 00:11:20.160 ] 00:11:20.160 }' 00:11:20.160 10:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.160 10:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.419 10:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:20.419 10:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.419 10:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.419 10:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.419 10:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.419 10:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:20.419 10:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:20.419 10:34:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.419 10:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.678 [2024-11-20 10:34:23.898367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:20.678 10:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.678 10:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:20.678 10:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.678 10:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.678 10:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:20.678 10:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.678 10:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.678 10:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.678 10:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.678 10:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.678 10:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.678 10:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.678 10:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.678 10:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.678 10:34:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.678 10:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.679 10:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.679 "name": "Existed_Raid", 00:11:20.679 "uuid": "20902c10-70cf-4008-8259-8075917b0492", 00:11:20.679 "strip_size_kb": 64, 00:11:20.679 "state": "configuring", 00:11:20.679 "raid_level": "raid0", 00:11:20.679 "superblock": true, 00:11:20.679 "num_base_bdevs": 4, 00:11:20.679 "num_base_bdevs_discovered": 3, 00:11:20.679 "num_base_bdevs_operational": 4, 00:11:20.679 "base_bdevs_list": [ 00:11:20.679 { 00:11:20.679 "name": null, 00:11:20.679 "uuid": "4fff2071-97a9-4fff-b9b2-b8991d7fd371", 00:11:20.679 "is_configured": false, 00:11:20.679 "data_offset": 0, 00:11:20.679 "data_size": 63488 00:11:20.679 }, 00:11:20.679 { 00:11:20.679 "name": "BaseBdev2", 00:11:20.679 "uuid": "c9c8be5b-6970-4481-8b0b-cd03bd3832cf", 00:11:20.679 "is_configured": true, 00:11:20.679 "data_offset": 2048, 00:11:20.679 "data_size": 63488 00:11:20.679 }, 00:11:20.679 { 00:11:20.679 "name": "BaseBdev3", 00:11:20.679 "uuid": "f2787794-baa8-40b9-8157-ec2670bf5dc4", 00:11:20.679 "is_configured": true, 00:11:20.679 "data_offset": 2048, 00:11:20.679 "data_size": 63488 00:11:20.679 }, 00:11:20.679 { 00:11:20.679 "name": "BaseBdev4", 00:11:20.679 "uuid": "20df6069-8464-4ef4-9696-fabc9d260861", 00:11:20.679 "is_configured": true, 00:11:20.679 "data_offset": 2048, 00:11:20.679 "data_size": 63488 00:11:20.679 } 00:11:20.679 ] 00:11:20.679 }' 00:11:20.679 10:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.679 10:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.939 10:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.939 10:34:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.939 10:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.939 10:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:20.939 10:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.939 10:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:20.939 10:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.939 10:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.939 10:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.939 10:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:20.939 10:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.198 10:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4fff2071-97a9-4fff-b9b2-b8991d7fd371 00:11:21.198 10:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.198 10:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.198 [2024-11-20 10:34:24.474554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:21.198 [2024-11-20 10:34:24.474811] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:21.198 [2024-11-20 10:34:24.474826] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:21.199 [2024-11-20 10:34:24.475117] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:21.199 [2024-11-20 10:34:24.475287] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:21.199 [2024-11-20 10:34:24.475301] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:21.199 [2024-11-20 10:34:24.475482] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.199 NewBaseBdev 00:11:21.199 10:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.199 10:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:21.199 10:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:21.199 10:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:21.199 10:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:21.199 10:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:21.199 10:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:21.199 10:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:21.199 10:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.199 10:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.199 10:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.199 10:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:21.199 10:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.199 10:34:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.199 [ 00:11:21.199 { 00:11:21.199 "name": "NewBaseBdev", 00:11:21.199 "aliases": [ 00:11:21.199 "4fff2071-97a9-4fff-b9b2-b8991d7fd371" 00:11:21.199 ], 00:11:21.199 "product_name": "Malloc disk", 00:11:21.199 "block_size": 512, 00:11:21.199 "num_blocks": 65536, 00:11:21.199 "uuid": "4fff2071-97a9-4fff-b9b2-b8991d7fd371", 00:11:21.199 "assigned_rate_limits": { 00:11:21.199 "rw_ios_per_sec": 0, 00:11:21.199 "rw_mbytes_per_sec": 0, 00:11:21.199 "r_mbytes_per_sec": 0, 00:11:21.199 "w_mbytes_per_sec": 0 00:11:21.199 }, 00:11:21.199 "claimed": true, 00:11:21.199 "claim_type": "exclusive_write", 00:11:21.199 "zoned": false, 00:11:21.199 "supported_io_types": { 00:11:21.199 "read": true, 00:11:21.199 "write": true, 00:11:21.199 "unmap": true, 00:11:21.199 "flush": true, 00:11:21.199 "reset": true, 00:11:21.199 "nvme_admin": false, 00:11:21.199 "nvme_io": false, 00:11:21.199 "nvme_io_md": false, 00:11:21.199 "write_zeroes": true, 00:11:21.199 "zcopy": true, 00:11:21.199 "get_zone_info": false, 00:11:21.199 "zone_management": false, 00:11:21.199 "zone_append": false, 00:11:21.199 "compare": false, 00:11:21.199 "compare_and_write": false, 00:11:21.199 "abort": true, 00:11:21.199 "seek_hole": false, 00:11:21.199 "seek_data": false, 00:11:21.199 "copy": true, 00:11:21.199 "nvme_iov_md": false 00:11:21.199 }, 00:11:21.199 "memory_domains": [ 00:11:21.199 { 00:11:21.199 "dma_device_id": "system", 00:11:21.199 "dma_device_type": 1 00:11:21.199 }, 00:11:21.199 { 00:11:21.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.199 "dma_device_type": 2 00:11:21.199 } 00:11:21.199 ], 00:11:21.199 "driver_specific": {} 00:11:21.199 } 00:11:21.199 ] 00:11:21.199 10:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.199 10:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:21.199 10:34:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:21.199 10:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.199 10:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:21.199 10:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:21.199 10:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.199 10:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.199 10:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.199 10:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.199 10:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.199 10:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.199 10:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.199 10:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.199 10:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.199 10:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.199 10:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.199 10:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.199 "name": "Existed_Raid", 00:11:21.199 "uuid": "20902c10-70cf-4008-8259-8075917b0492", 00:11:21.199 "strip_size_kb": 64, 00:11:21.199 
"state": "online", 00:11:21.199 "raid_level": "raid0", 00:11:21.199 "superblock": true, 00:11:21.199 "num_base_bdevs": 4, 00:11:21.199 "num_base_bdevs_discovered": 4, 00:11:21.199 "num_base_bdevs_operational": 4, 00:11:21.199 "base_bdevs_list": [ 00:11:21.199 { 00:11:21.199 "name": "NewBaseBdev", 00:11:21.199 "uuid": "4fff2071-97a9-4fff-b9b2-b8991d7fd371", 00:11:21.199 "is_configured": true, 00:11:21.199 "data_offset": 2048, 00:11:21.199 "data_size": 63488 00:11:21.199 }, 00:11:21.199 { 00:11:21.199 "name": "BaseBdev2", 00:11:21.199 "uuid": "c9c8be5b-6970-4481-8b0b-cd03bd3832cf", 00:11:21.199 "is_configured": true, 00:11:21.199 "data_offset": 2048, 00:11:21.199 "data_size": 63488 00:11:21.199 }, 00:11:21.199 { 00:11:21.199 "name": "BaseBdev3", 00:11:21.199 "uuid": "f2787794-baa8-40b9-8157-ec2670bf5dc4", 00:11:21.199 "is_configured": true, 00:11:21.199 "data_offset": 2048, 00:11:21.199 "data_size": 63488 00:11:21.199 }, 00:11:21.199 { 00:11:21.199 "name": "BaseBdev4", 00:11:21.199 "uuid": "20df6069-8464-4ef4-9696-fabc9d260861", 00:11:21.199 "is_configured": true, 00:11:21.199 "data_offset": 2048, 00:11:21.199 "data_size": 63488 00:11:21.199 } 00:11:21.199 ] 00:11:21.199 }' 00:11:21.199 10:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.199 10:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.770 10:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:21.770 10:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:21.770 10:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:21.770 10:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:21.770 10:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:21.770 
10:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:21.770 10:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:21.770 10:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.770 10:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.770 10:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:21.770 [2024-11-20 10:34:24.966213] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:21.770 10:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.770 10:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:21.770 "name": "Existed_Raid", 00:11:21.770 "aliases": [ 00:11:21.770 "20902c10-70cf-4008-8259-8075917b0492" 00:11:21.770 ], 00:11:21.770 "product_name": "Raid Volume", 00:11:21.770 "block_size": 512, 00:11:21.770 "num_blocks": 253952, 00:11:21.770 "uuid": "20902c10-70cf-4008-8259-8075917b0492", 00:11:21.770 "assigned_rate_limits": { 00:11:21.770 "rw_ios_per_sec": 0, 00:11:21.770 "rw_mbytes_per_sec": 0, 00:11:21.770 "r_mbytes_per_sec": 0, 00:11:21.770 "w_mbytes_per_sec": 0 00:11:21.770 }, 00:11:21.770 "claimed": false, 00:11:21.770 "zoned": false, 00:11:21.770 "supported_io_types": { 00:11:21.770 "read": true, 00:11:21.770 "write": true, 00:11:21.770 "unmap": true, 00:11:21.770 "flush": true, 00:11:21.770 "reset": true, 00:11:21.770 "nvme_admin": false, 00:11:21.770 "nvme_io": false, 00:11:21.770 "nvme_io_md": false, 00:11:21.770 "write_zeroes": true, 00:11:21.770 "zcopy": false, 00:11:21.770 "get_zone_info": false, 00:11:21.770 "zone_management": false, 00:11:21.770 "zone_append": false, 00:11:21.770 "compare": false, 00:11:21.770 "compare_and_write": false, 00:11:21.770 "abort": 
false, 00:11:21.770 "seek_hole": false, 00:11:21.770 "seek_data": false, 00:11:21.770 "copy": false, 00:11:21.770 "nvme_iov_md": false 00:11:21.770 }, 00:11:21.770 "memory_domains": [ 00:11:21.770 { 00:11:21.770 "dma_device_id": "system", 00:11:21.770 "dma_device_type": 1 00:11:21.770 }, 00:11:21.770 { 00:11:21.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.770 "dma_device_type": 2 00:11:21.770 }, 00:11:21.770 { 00:11:21.770 "dma_device_id": "system", 00:11:21.770 "dma_device_type": 1 00:11:21.770 }, 00:11:21.770 { 00:11:21.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.770 "dma_device_type": 2 00:11:21.770 }, 00:11:21.770 { 00:11:21.770 "dma_device_id": "system", 00:11:21.770 "dma_device_type": 1 00:11:21.770 }, 00:11:21.770 { 00:11:21.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.770 "dma_device_type": 2 00:11:21.770 }, 00:11:21.770 { 00:11:21.770 "dma_device_id": "system", 00:11:21.770 "dma_device_type": 1 00:11:21.770 }, 00:11:21.770 { 00:11:21.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.770 "dma_device_type": 2 00:11:21.770 } 00:11:21.770 ], 00:11:21.770 "driver_specific": { 00:11:21.770 "raid": { 00:11:21.770 "uuid": "20902c10-70cf-4008-8259-8075917b0492", 00:11:21.770 "strip_size_kb": 64, 00:11:21.770 "state": "online", 00:11:21.770 "raid_level": "raid0", 00:11:21.770 "superblock": true, 00:11:21.770 "num_base_bdevs": 4, 00:11:21.770 "num_base_bdevs_discovered": 4, 00:11:21.770 "num_base_bdevs_operational": 4, 00:11:21.770 "base_bdevs_list": [ 00:11:21.770 { 00:11:21.770 "name": "NewBaseBdev", 00:11:21.770 "uuid": "4fff2071-97a9-4fff-b9b2-b8991d7fd371", 00:11:21.770 "is_configured": true, 00:11:21.770 "data_offset": 2048, 00:11:21.770 "data_size": 63488 00:11:21.770 }, 00:11:21.770 { 00:11:21.770 "name": "BaseBdev2", 00:11:21.770 "uuid": "c9c8be5b-6970-4481-8b0b-cd03bd3832cf", 00:11:21.770 "is_configured": true, 00:11:21.770 "data_offset": 2048, 00:11:21.770 "data_size": 63488 00:11:21.770 }, 00:11:21.770 { 00:11:21.770 
"name": "BaseBdev3", 00:11:21.770 "uuid": "f2787794-baa8-40b9-8157-ec2670bf5dc4", 00:11:21.770 "is_configured": true, 00:11:21.770 "data_offset": 2048, 00:11:21.770 "data_size": 63488 00:11:21.770 }, 00:11:21.770 { 00:11:21.770 "name": "BaseBdev4", 00:11:21.770 "uuid": "20df6069-8464-4ef4-9696-fabc9d260861", 00:11:21.770 "is_configured": true, 00:11:21.770 "data_offset": 2048, 00:11:21.770 "data_size": 63488 00:11:21.770 } 00:11:21.770 ] 00:11:21.770 } 00:11:21.770 } 00:11:21.770 }' 00:11:21.770 10:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:21.770 10:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:21.770 BaseBdev2 00:11:21.770 BaseBdev3 00:11:21.770 BaseBdev4' 00:11:21.770 10:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.770 10:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:21.770 10:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.770 10:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.770 10:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:21.770 10:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.770 10:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.771 10:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.771 10:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.771 10:34:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.771 10:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.771 10:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:21.771 10:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.771 10:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.771 10:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.771 10:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.771 10:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.771 10:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.771 10:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.771 10:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:21.771 10:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.771 10:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.771 10:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.771 10:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.771 10:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.771 10:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:21.771 10:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.771 10:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:21.771 10:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.771 10:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.771 10:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.771 10:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.030 10:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.030 10:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.030 10:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:22.030 10:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.030 10:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.030 [2024-11-20 10:34:25.281274] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:22.031 [2024-11-20 10:34:25.281388] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:22.031 [2024-11-20 10:34:25.281488] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:22.031 [2024-11-20 10:34:25.281567] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:22.031 [2024-11-20 10:34:25.281578] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:22.031 10:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.031 10:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70230 00:11:22.031 10:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70230 ']' 00:11:22.031 10:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70230 00:11:22.031 10:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:22.031 10:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:22.031 10:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70230 00:11:22.031 killing process with pid 70230 00:11:22.031 10:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:22.031 10:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:22.031 10:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70230' 00:11:22.031 10:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70230 00:11:22.031 [2024-11-20 10:34:25.322303] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:22.031 10:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70230 00:11:22.600 [2024-11-20 10:34:25.799145] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:23.979 10:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:23.979 ************************************ 00:11:23.979 END TEST raid_state_function_test_sb 00:11:23.979 ************************************ 00:11:23.979 00:11:23.979 real 0m12.118s 00:11:23.979 user 0m19.018s 00:11:23.979 sys 
0m2.036s 00:11:23.979 10:34:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.979 10:34:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.979 10:34:27 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:11:23.979 10:34:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:23.979 10:34:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.979 10:34:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:23.979 ************************************ 00:11:23.979 START TEST raid_superblock_test 00:11:23.979 ************************************ 00:11:23.979 10:34:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:11:23.979 10:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:11:23.979 10:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:23.979 10:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:23.979 10:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:23.979 10:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:23.979 10:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:23.979 10:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:23.979 10:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:23.979 10:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:23.979 10:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:23.979 10:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # 
local strip_size_create_arg 00:11:23.979 10:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:23.979 10:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:23.979 10:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:11:23.979 10:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:23.979 10:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:23.979 10:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70908 00:11:23.979 10:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70908 00:11:23.979 10:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:23.979 10:34:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70908 ']' 00:11:23.979 10:34:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.979 10:34:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:23.979 10:34:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.979 10:34:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:23.979 10:34:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.979 [2024-11-20 10:34:27.268179] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:11:23.979 [2024-11-20 10:34:27.268437] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70908 ] 00:11:23.979 [2024-11-20 10:34:27.442077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.239 [2024-11-20 10:34:27.579127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.499 [2024-11-20 10:34:27.825287] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:24.499 [2024-11-20 10:34:27.825445] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:24.758 10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:24.758 10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:24.758 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:24.758 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:24.758 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:24.758 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:24.758 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:24.758 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:24.758 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:24.758 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:24.758 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:24.758 
10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.758 10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.758 malloc1 00:11:24.758 10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.758 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:24.758 10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.758 10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.758 [2024-11-20 10:34:28.229056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:24.758 [2024-11-20 10:34:28.229219] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.758 [2024-11-20 10:34:28.229261] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:24.758 [2024-11-20 10:34:28.229274] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.758 [2024-11-20 10:34:28.231969] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.758 [2024-11-20 10:34:28.232015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:25.019 pt1 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.019 malloc2 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.019 [2024-11-20 10:34:28.288533] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:25.019 [2024-11-20 10:34:28.288673] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.019 [2024-11-20 10:34:28.288731] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:25.019 [2024-11-20 10:34:28.288796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.019 [2024-11-20 10:34:28.291327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.019 [2024-11-20 10:34:28.291445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:25.019 
pt2 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.019 malloc3 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.019 [2024-11-20 10:34:28.365334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:25.019 [2024-11-20 10:34:28.365474] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.019 [2024-11-20 10:34:28.365546] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:25.019 [2024-11-20 10:34:28.365600] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.019 [2024-11-20 10:34:28.368116] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.019 [2024-11-20 10:34:28.368227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:25.019 pt3 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.019 malloc4 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.019 10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.019 [2024-11-20 10:34:28.430899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:25.019 [2024-11-20 10:34:28.431038] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.019 [2024-11-20 10:34:28.431106] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:25.019 [2024-11-20 10:34:28.431151] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.019 [2024-11-20 10:34:28.433670] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.019 [2024-11-20 10:34:28.433763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:25.019 pt4 00:11:25.020 10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.020 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:25.020 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:25.020 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:25.020 10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.020 10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.020 [2024-11-20 10:34:28.442919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:25.020 [2024-11-20 
10:34:28.445110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:25.020 [2024-11-20 10:34:28.445256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:25.020 [2024-11-20 10:34:28.445398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:25.020 [2024-11-20 10:34:28.445681] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:25.020 [2024-11-20 10:34:28.445738] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:25.020 [2024-11-20 10:34:28.446111] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:25.020 [2024-11-20 10:34:28.446376] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:25.020 [2024-11-20 10:34:28.446434] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:25.020 [2024-11-20 10:34:28.446697] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:25.020 10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.020 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:25.020 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.020 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.020 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:25.020 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.020 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.020 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:25.020 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.020 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.020 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.020 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.020 10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.020 10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.020 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.020 10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.020 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.020 "name": "raid_bdev1", 00:11:25.020 "uuid": "72d3bbda-553e-44fa-8884-bf9483fdc77a", 00:11:25.020 "strip_size_kb": 64, 00:11:25.020 "state": "online", 00:11:25.020 "raid_level": "raid0", 00:11:25.020 "superblock": true, 00:11:25.020 "num_base_bdevs": 4, 00:11:25.020 "num_base_bdevs_discovered": 4, 00:11:25.020 "num_base_bdevs_operational": 4, 00:11:25.020 "base_bdevs_list": [ 00:11:25.020 { 00:11:25.020 "name": "pt1", 00:11:25.020 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:25.020 "is_configured": true, 00:11:25.020 "data_offset": 2048, 00:11:25.020 "data_size": 63488 00:11:25.020 }, 00:11:25.020 { 00:11:25.020 "name": "pt2", 00:11:25.020 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:25.020 "is_configured": true, 00:11:25.020 "data_offset": 2048, 00:11:25.020 "data_size": 63488 00:11:25.020 }, 00:11:25.020 { 00:11:25.020 "name": "pt3", 00:11:25.020 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:25.020 "is_configured": true, 00:11:25.020 "data_offset": 2048, 00:11:25.020 
"data_size": 63488 00:11:25.020 }, 00:11:25.020 { 00:11:25.020 "name": "pt4", 00:11:25.020 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:25.020 "is_configured": true, 00:11:25.020 "data_offset": 2048, 00:11:25.020 "data_size": 63488 00:11:25.020 } 00:11:25.020 ] 00:11:25.020 }' 00:11:25.020 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.020 10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.589 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:25.589 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:25.589 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:25.589 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:25.589 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:25.589 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:25.589 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:25.589 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:25.589 10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.589 10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.589 [2024-11-20 10:34:28.834687] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:25.589 10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.589 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:25.589 "name": "raid_bdev1", 00:11:25.589 "aliases": [ 00:11:25.589 "72d3bbda-553e-44fa-8884-bf9483fdc77a" 
00:11:25.589 ], 00:11:25.589 "product_name": "Raid Volume", 00:11:25.589 "block_size": 512, 00:11:25.589 "num_blocks": 253952, 00:11:25.589 "uuid": "72d3bbda-553e-44fa-8884-bf9483fdc77a", 00:11:25.589 "assigned_rate_limits": { 00:11:25.589 "rw_ios_per_sec": 0, 00:11:25.589 "rw_mbytes_per_sec": 0, 00:11:25.589 "r_mbytes_per_sec": 0, 00:11:25.589 "w_mbytes_per_sec": 0 00:11:25.589 }, 00:11:25.589 "claimed": false, 00:11:25.589 "zoned": false, 00:11:25.589 "supported_io_types": { 00:11:25.589 "read": true, 00:11:25.589 "write": true, 00:11:25.589 "unmap": true, 00:11:25.589 "flush": true, 00:11:25.589 "reset": true, 00:11:25.589 "nvme_admin": false, 00:11:25.589 "nvme_io": false, 00:11:25.589 "nvme_io_md": false, 00:11:25.589 "write_zeroes": true, 00:11:25.589 "zcopy": false, 00:11:25.589 "get_zone_info": false, 00:11:25.589 "zone_management": false, 00:11:25.589 "zone_append": false, 00:11:25.589 "compare": false, 00:11:25.589 "compare_and_write": false, 00:11:25.589 "abort": false, 00:11:25.589 "seek_hole": false, 00:11:25.589 "seek_data": false, 00:11:25.589 "copy": false, 00:11:25.589 "nvme_iov_md": false 00:11:25.589 }, 00:11:25.590 "memory_domains": [ 00:11:25.590 { 00:11:25.590 "dma_device_id": "system", 00:11:25.590 "dma_device_type": 1 00:11:25.590 }, 00:11:25.590 { 00:11:25.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.590 "dma_device_type": 2 00:11:25.590 }, 00:11:25.590 { 00:11:25.590 "dma_device_id": "system", 00:11:25.590 "dma_device_type": 1 00:11:25.590 }, 00:11:25.590 { 00:11:25.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.590 "dma_device_type": 2 00:11:25.590 }, 00:11:25.590 { 00:11:25.590 "dma_device_id": "system", 00:11:25.590 "dma_device_type": 1 00:11:25.590 }, 00:11:25.590 { 00:11:25.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.590 "dma_device_type": 2 00:11:25.590 }, 00:11:25.590 { 00:11:25.590 "dma_device_id": "system", 00:11:25.590 "dma_device_type": 1 00:11:25.590 }, 00:11:25.590 { 00:11:25.590 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:25.590 "dma_device_type": 2 00:11:25.590 } 00:11:25.590 ], 00:11:25.590 "driver_specific": { 00:11:25.590 "raid": { 00:11:25.590 "uuid": "72d3bbda-553e-44fa-8884-bf9483fdc77a", 00:11:25.590 "strip_size_kb": 64, 00:11:25.590 "state": "online", 00:11:25.590 "raid_level": "raid0", 00:11:25.590 "superblock": true, 00:11:25.590 "num_base_bdevs": 4, 00:11:25.590 "num_base_bdevs_discovered": 4, 00:11:25.590 "num_base_bdevs_operational": 4, 00:11:25.590 "base_bdevs_list": [ 00:11:25.590 { 00:11:25.590 "name": "pt1", 00:11:25.590 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:25.590 "is_configured": true, 00:11:25.590 "data_offset": 2048, 00:11:25.590 "data_size": 63488 00:11:25.590 }, 00:11:25.590 { 00:11:25.590 "name": "pt2", 00:11:25.590 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:25.590 "is_configured": true, 00:11:25.590 "data_offset": 2048, 00:11:25.590 "data_size": 63488 00:11:25.590 }, 00:11:25.590 { 00:11:25.590 "name": "pt3", 00:11:25.590 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:25.590 "is_configured": true, 00:11:25.590 "data_offset": 2048, 00:11:25.590 "data_size": 63488 00:11:25.590 }, 00:11:25.590 { 00:11:25.590 "name": "pt4", 00:11:25.590 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:25.590 "is_configured": true, 00:11:25.590 "data_offset": 2048, 00:11:25.590 "data_size": 63488 00:11:25.590 } 00:11:25.590 ] 00:11:25.590 } 00:11:25.590 } 00:11:25.590 }' 00:11:25.590 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:25.590 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:25.590 pt2 00:11:25.590 pt3 00:11:25.590 pt4' 00:11:25.590 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.590 10:34:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:25.590 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.590 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:25.590 10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.590 10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.590 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.590 10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.590 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.590 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.590 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.590 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:25.590 10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.590 10:34:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.590 10:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.590 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.590 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.590 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.590 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.590 10:34:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:25.590 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.590 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.590 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.849 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.849 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.849 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.849 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.849 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.849 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:25.849 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.849 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.849 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.849 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.849 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.849 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:25.849 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.849 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:11:25.849 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:25.849 [2024-11-20 10:34:29.174088] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:25.850 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.850 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=72d3bbda-553e-44fa-8884-bf9483fdc77a 00:11:25.850 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 72d3bbda-553e-44fa-8884-bf9483fdc77a ']' 00:11:25.850 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:25.850 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.850 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.850 [2024-11-20 10:34:29.221655] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:25.850 [2024-11-20 10:34:29.221685] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:25.850 [2024-11-20 10:34:29.221788] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:25.850 [2024-11-20 10:34:29.221865] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:25.850 [2024-11-20 10:34:29.221881] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:25.850 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.850 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.850 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.850 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:11:25.850 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:25.850 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.850 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:25.850 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:25.850 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:25.850 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:25.850 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.850 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.850 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.850 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:25.850 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:25.850 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.850 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.850 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.850 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:25.850 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:25.850 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.850 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.850 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:11:25.850 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:25.850 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:25.850 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.850 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.109 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.109 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:26.109 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:26.109 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.109 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.109 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.109 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:26.109 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:26.109 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:26.109 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:26.109 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:26.109 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:26.109 10:34:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:26.109 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:26.109 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:26.109 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.109 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.109 [2024-11-20 10:34:29.393419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:26.109 [2024-11-20 10:34:29.395645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:26.109 [2024-11-20 10:34:29.395802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:26.109 [2024-11-20 10:34:29.395983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:26.109 [2024-11-20 10:34:29.396097] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:26.110 [2024-11-20 10:34:29.396259] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:26.110 [2024-11-20 10:34:29.396351] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:26.110 [2024-11-20 10:34:29.396453] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:26.110 [2024-11-20 10:34:29.396508] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:26.110 [2024-11-20 10:34:29.396555] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:11:26.110 request: 00:11:26.110 { 00:11:26.110 "name": "raid_bdev1", 00:11:26.110 "raid_level": "raid0", 00:11:26.110 "base_bdevs": [ 00:11:26.110 "malloc1", 00:11:26.110 "malloc2", 00:11:26.110 "malloc3", 00:11:26.110 "malloc4" 00:11:26.110 ], 00:11:26.110 "strip_size_kb": 64, 00:11:26.110 "superblock": false, 00:11:26.110 "method": "bdev_raid_create", 00:11:26.110 "req_id": 1 00:11:26.110 } 00:11:26.110 Got JSON-RPC error response 00:11:26.110 response: 00:11:26.110 { 00:11:26.110 "code": -17, 00:11:26.110 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:26.110 } 00:11:26.110 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:26.110 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:26.110 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:26.110 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:26.110 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:26.110 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:26.110 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.110 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.110 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.110 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.110 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:26.110 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:26.110 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:11:26.110 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.110 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.110 [2024-11-20 10:34:29.469258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:26.110 [2024-11-20 10:34:29.469421] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.110 [2024-11-20 10:34:29.469448] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:26.110 [2024-11-20 10:34:29.469462] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.110 [2024-11-20 10:34:29.472006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.110 [2024-11-20 10:34:29.472053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:26.110 [2024-11-20 10:34:29.472148] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:26.110 [2024-11-20 10:34:29.472224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:26.110 pt1 00:11:26.110 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.110 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:26.110 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:26.110 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.110 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:26.110 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.110 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:26.110 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.110 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.110 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.110 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.110 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.110 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.110 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.110 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.110 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.110 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.110 "name": "raid_bdev1", 00:11:26.110 "uuid": "72d3bbda-553e-44fa-8884-bf9483fdc77a", 00:11:26.110 "strip_size_kb": 64, 00:11:26.110 "state": "configuring", 00:11:26.110 "raid_level": "raid0", 00:11:26.110 "superblock": true, 00:11:26.110 "num_base_bdevs": 4, 00:11:26.110 "num_base_bdevs_discovered": 1, 00:11:26.110 "num_base_bdevs_operational": 4, 00:11:26.110 "base_bdevs_list": [ 00:11:26.110 { 00:11:26.110 "name": "pt1", 00:11:26.110 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:26.110 "is_configured": true, 00:11:26.110 "data_offset": 2048, 00:11:26.110 "data_size": 63488 00:11:26.110 }, 00:11:26.110 { 00:11:26.110 "name": null, 00:11:26.110 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:26.110 "is_configured": false, 00:11:26.110 "data_offset": 2048, 00:11:26.110 "data_size": 63488 00:11:26.110 }, 00:11:26.110 { 00:11:26.110 "name": null, 00:11:26.110 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:11:26.110 "is_configured": false, 00:11:26.110 "data_offset": 2048, 00:11:26.110 "data_size": 63488 00:11:26.110 }, 00:11:26.110 { 00:11:26.110 "name": null, 00:11:26.110 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:26.110 "is_configured": false, 00:11:26.110 "data_offset": 2048, 00:11:26.110 "data_size": 63488 00:11:26.110 } 00:11:26.110 ] 00:11:26.110 }' 00:11:26.110 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.110 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.719 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:26.719 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:26.719 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.719 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.719 [2024-11-20 10:34:29.940539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:26.719 [2024-11-20 10:34:29.940701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.719 [2024-11-20 10:34:29.940746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:26.719 [2024-11-20 10:34:29.940784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.719 [2024-11-20 10:34:29.941309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.719 [2024-11-20 10:34:29.941403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:26.719 [2024-11-20 10:34:29.941536] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:26.719 [2024-11-20 10:34:29.941599] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:26.719 pt2 00:11:26.719 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.719 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:26.719 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.719 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.719 [2024-11-20 10:34:29.952550] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:26.719 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.719 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:26.719 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:26.719 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.719 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:26.719 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.719 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.719 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.719 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.719 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.719 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.719 10:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.719 10:34:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.719 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.719 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.719 10:34:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.719 10:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.719 "name": "raid_bdev1", 00:11:26.719 "uuid": "72d3bbda-553e-44fa-8884-bf9483fdc77a", 00:11:26.719 "strip_size_kb": 64, 00:11:26.719 "state": "configuring", 00:11:26.719 "raid_level": "raid0", 00:11:26.719 "superblock": true, 00:11:26.719 "num_base_bdevs": 4, 00:11:26.719 "num_base_bdevs_discovered": 1, 00:11:26.719 "num_base_bdevs_operational": 4, 00:11:26.719 "base_bdevs_list": [ 00:11:26.719 { 00:11:26.719 "name": "pt1", 00:11:26.719 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:26.719 "is_configured": true, 00:11:26.719 "data_offset": 2048, 00:11:26.719 "data_size": 63488 00:11:26.719 }, 00:11:26.719 { 00:11:26.719 "name": null, 00:11:26.719 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:26.719 "is_configured": false, 00:11:26.719 "data_offset": 0, 00:11:26.719 "data_size": 63488 00:11:26.719 }, 00:11:26.719 { 00:11:26.719 "name": null, 00:11:26.719 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:26.719 "is_configured": false, 00:11:26.719 "data_offset": 2048, 00:11:26.719 "data_size": 63488 00:11:26.719 }, 00:11:26.719 { 00:11:26.719 "name": null, 00:11:26.719 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:26.719 "is_configured": false, 00:11:26.719 "data_offset": 2048, 00:11:26.719 "data_size": 63488 00:11:26.719 } 00:11:26.719 ] 00:11:26.719 }' 00:11:26.720 10:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.720 10:34:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:27.019 10:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:27.019 10:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:27.019 10:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:27.019 10:34:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.019 10:34:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.019 [2024-11-20 10:34:30.447677] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:27.019 [2024-11-20 10:34:30.447756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.019 [2024-11-20 10:34:30.447779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:27.019 [2024-11-20 10:34:30.447790] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.019 [2024-11-20 10:34:30.448271] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.019 [2024-11-20 10:34:30.448303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:27.019 [2024-11-20 10:34:30.448409] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:27.019 [2024-11-20 10:34:30.448436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:27.019 pt2 00:11:27.019 10:34:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.019 10:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:27.019 10:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:27.019 10:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:27.019 10:34:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.019 10:34:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.019 [2024-11-20 10:34:30.459622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:27.019 [2024-11-20 10:34:30.459677] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.019 [2024-11-20 10:34:30.459703] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:27.019 [2024-11-20 10:34:30.459715] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.019 [2024-11-20 10:34:30.460132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.019 [2024-11-20 10:34:30.460163] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:27.019 [2024-11-20 10:34:30.460237] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:27.019 [2024-11-20 10:34:30.460257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:27.019 pt3 00:11:27.019 10:34:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.019 10:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:27.019 10:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:27.019 10:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:27.019 10:34:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.019 10:34:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.019 [2024-11-20 10:34:30.471596] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:27.019 [2024-11-20 10:34:30.471654] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.019 [2024-11-20 10:34:30.471677] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:27.019 [2024-11-20 10:34:30.471687] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.019 [2024-11-20 10:34:30.472109] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.019 [2024-11-20 10:34:30.472127] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:27.019 [2024-11-20 10:34:30.472200] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:27.019 [2024-11-20 10:34:30.472221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:27.019 [2024-11-20 10:34:30.472404] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:27.019 [2024-11-20 10:34:30.472415] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:27.019 [2024-11-20 10:34:30.472680] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:27.019 [2024-11-20 10:34:30.472864] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:27.019 [2024-11-20 10:34:30.472880] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:27.019 [2024-11-20 10:34:30.473031] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:27.019 pt4 00:11:27.019 10:34:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.019 10:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:27.019 10:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:27.019 10:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:27.019 10:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:27.019 10:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:27.019 10:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:27.019 10:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.019 10:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.019 10:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.019 10:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.019 10:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.019 10:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.019 10:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.019 10:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.019 10:34:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.019 10:34:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.276 10:34:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.276 10:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.276 "name": "raid_bdev1", 00:11:27.276 "uuid": "72d3bbda-553e-44fa-8884-bf9483fdc77a", 00:11:27.276 "strip_size_kb": 64, 00:11:27.276 "state": "online", 00:11:27.276 "raid_level": "raid0", 00:11:27.276 
"superblock": true, 00:11:27.276 "num_base_bdevs": 4, 00:11:27.276 "num_base_bdevs_discovered": 4, 00:11:27.276 "num_base_bdevs_operational": 4, 00:11:27.276 "base_bdevs_list": [ 00:11:27.276 { 00:11:27.276 "name": "pt1", 00:11:27.276 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:27.276 "is_configured": true, 00:11:27.276 "data_offset": 2048, 00:11:27.276 "data_size": 63488 00:11:27.276 }, 00:11:27.276 { 00:11:27.276 "name": "pt2", 00:11:27.276 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:27.276 "is_configured": true, 00:11:27.276 "data_offset": 2048, 00:11:27.276 "data_size": 63488 00:11:27.276 }, 00:11:27.276 { 00:11:27.276 "name": "pt3", 00:11:27.276 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:27.276 "is_configured": true, 00:11:27.276 "data_offset": 2048, 00:11:27.276 "data_size": 63488 00:11:27.276 }, 00:11:27.276 { 00:11:27.276 "name": "pt4", 00:11:27.276 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:27.276 "is_configured": true, 00:11:27.276 "data_offset": 2048, 00:11:27.276 "data_size": 63488 00:11:27.276 } 00:11:27.276 ] 00:11:27.276 }' 00:11:27.276 10:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.276 10:34:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.535 10:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:27.535 10:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:27.535 10:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:27.535 10:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:27.535 10:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:27.535 10:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:27.535 10:34:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:27.535 10:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:27.535 10:34:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.535 10:34:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.535 [2024-11-20 10:34:30.931264] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:27.535 10:34:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.535 10:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:27.535 "name": "raid_bdev1", 00:11:27.535 "aliases": [ 00:11:27.535 "72d3bbda-553e-44fa-8884-bf9483fdc77a" 00:11:27.535 ], 00:11:27.535 "product_name": "Raid Volume", 00:11:27.535 "block_size": 512, 00:11:27.535 "num_blocks": 253952, 00:11:27.535 "uuid": "72d3bbda-553e-44fa-8884-bf9483fdc77a", 00:11:27.535 "assigned_rate_limits": { 00:11:27.535 "rw_ios_per_sec": 0, 00:11:27.535 "rw_mbytes_per_sec": 0, 00:11:27.535 "r_mbytes_per_sec": 0, 00:11:27.535 "w_mbytes_per_sec": 0 00:11:27.535 }, 00:11:27.535 "claimed": false, 00:11:27.535 "zoned": false, 00:11:27.535 "supported_io_types": { 00:11:27.535 "read": true, 00:11:27.535 "write": true, 00:11:27.535 "unmap": true, 00:11:27.535 "flush": true, 00:11:27.535 "reset": true, 00:11:27.535 "nvme_admin": false, 00:11:27.535 "nvme_io": false, 00:11:27.535 "nvme_io_md": false, 00:11:27.535 "write_zeroes": true, 00:11:27.535 "zcopy": false, 00:11:27.535 "get_zone_info": false, 00:11:27.535 "zone_management": false, 00:11:27.535 "zone_append": false, 00:11:27.535 "compare": false, 00:11:27.535 "compare_and_write": false, 00:11:27.535 "abort": false, 00:11:27.535 "seek_hole": false, 00:11:27.535 "seek_data": false, 00:11:27.535 "copy": false, 00:11:27.535 "nvme_iov_md": false 00:11:27.535 }, 00:11:27.535 
"memory_domains": [ 00:11:27.535 { 00:11:27.535 "dma_device_id": "system", 00:11:27.535 "dma_device_type": 1 00:11:27.535 }, 00:11:27.535 { 00:11:27.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.535 "dma_device_type": 2 00:11:27.535 }, 00:11:27.535 { 00:11:27.535 "dma_device_id": "system", 00:11:27.535 "dma_device_type": 1 00:11:27.535 }, 00:11:27.535 { 00:11:27.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.535 "dma_device_type": 2 00:11:27.535 }, 00:11:27.535 { 00:11:27.535 "dma_device_id": "system", 00:11:27.535 "dma_device_type": 1 00:11:27.535 }, 00:11:27.535 { 00:11:27.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.535 "dma_device_type": 2 00:11:27.535 }, 00:11:27.535 { 00:11:27.535 "dma_device_id": "system", 00:11:27.535 "dma_device_type": 1 00:11:27.535 }, 00:11:27.535 { 00:11:27.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.535 "dma_device_type": 2 00:11:27.535 } 00:11:27.535 ], 00:11:27.535 "driver_specific": { 00:11:27.535 "raid": { 00:11:27.535 "uuid": "72d3bbda-553e-44fa-8884-bf9483fdc77a", 00:11:27.535 "strip_size_kb": 64, 00:11:27.535 "state": "online", 00:11:27.535 "raid_level": "raid0", 00:11:27.535 "superblock": true, 00:11:27.535 "num_base_bdevs": 4, 00:11:27.535 "num_base_bdevs_discovered": 4, 00:11:27.535 "num_base_bdevs_operational": 4, 00:11:27.535 "base_bdevs_list": [ 00:11:27.535 { 00:11:27.535 "name": "pt1", 00:11:27.535 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:27.535 "is_configured": true, 00:11:27.535 "data_offset": 2048, 00:11:27.535 "data_size": 63488 00:11:27.535 }, 00:11:27.535 { 00:11:27.535 "name": "pt2", 00:11:27.535 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:27.535 "is_configured": true, 00:11:27.535 "data_offset": 2048, 00:11:27.535 "data_size": 63488 00:11:27.535 }, 00:11:27.535 { 00:11:27.535 "name": "pt3", 00:11:27.535 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:27.535 "is_configured": true, 00:11:27.535 "data_offset": 2048, 00:11:27.535 "data_size": 63488 
00:11:27.535 }, 00:11:27.535 { 00:11:27.536 "name": "pt4", 00:11:27.536 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:27.536 "is_configured": true, 00:11:27.536 "data_offset": 2048, 00:11:27.536 "data_size": 63488 00:11:27.536 } 00:11:27.536 ] 00:11:27.536 } 00:11:27.536 } 00:11:27.536 }' 00:11:27.536 10:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:27.536 10:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:27.536 pt2 00:11:27.536 pt3 00:11:27.536 pt4' 00:11:27.536 10:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.793 10:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:27.793 10:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.793 10:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:27.793 10:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.793 10:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.793 10:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.793 10:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.793 10:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.793 10:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.793 10:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.793 10:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:27.793 10:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.793 10:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.793 10:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.793 10:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.793 10:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.793 10:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.793 10:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.793 10:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:27.793 10:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.793 10:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.793 10:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.793 10:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.793 10:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.793 10:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.793 10:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.793 10:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:27.793 10:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.793 10:34:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:27.793 10:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.793 10:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.793 10:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.793 10:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.793 10:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:27.793 10:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:27.793 10:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.793 10:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.793 [2024-11-20 10:34:31.254685] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:28.053 10:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.053 10:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 72d3bbda-553e-44fa-8884-bf9483fdc77a '!=' 72d3bbda-553e-44fa-8884-bf9483fdc77a ']' 00:11:28.053 10:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:11:28.053 10:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:28.053 10:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:28.053 10:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70908 00:11:28.053 10:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70908 ']' 00:11:28.053 10:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70908 00:11:28.053 10:34:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:11:28.053 10:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:28.053 10:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70908 00:11:28.053 10:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:28.053 10:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:28.053 10:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70908' 00:11:28.053 killing process with pid 70908 00:11:28.053 10:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70908 00:11:28.053 [2024-11-20 10:34:31.313835] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:28.053 10:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70908 00:11:28.053 [2024-11-20 10:34:31.313993] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:28.053 [2024-11-20 10:34:31.314113] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:28.053 [2024-11-20 10:34:31.314167] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:28.311 [2024-11-20 10:34:31.782398] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:29.689 10:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:29.689 00:11:29.689 real 0m5.906s 00:11:29.689 user 0m8.348s 00:11:29.689 sys 0m0.969s 00:11:29.689 10:34:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.689 ************************************ 00:11:29.689 END TEST raid_superblock_test 00:11:29.689 ************************************ 00:11:29.689 10:34:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.689 10:34:33 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:11:29.689 10:34:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:29.689 10:34:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.689 10:34:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:29.689 ************************************ 00:11:29.689 START TEST raid_read_error_test 00:11:29.689 ************************************ 00:11:29.689 10:34:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:11:29.689 10:34:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:29.689 10:34:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:29.689 10:34:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:29.689 10:34:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:29.689 10:34:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:29.689 10:34:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:29.689 10:34:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:29.689 10:34:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:29.689 10:34:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:29.689 10:34:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:29.689 10:34:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:29.689 10:34:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:29.689 10:34:33 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:29.689 10:34:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:29.689 10:34:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:29.689 10:34:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:29.689 10:34:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:29.689 10:34:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:29.689 10:34:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:29.689 10:34:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:29.690 10:34:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:29.690 10:34:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:29.690 10:34:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:29.690 10:34:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:29.690 10:34:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:29.690 10:34:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:29.690 10:34:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:29.690 10:34:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:29.690 10:34:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.jd4KatAn4c 00:11:29.690 10:34:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71177 00:11:29.690 10:34:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71177 00:11:29.690 10:34:33 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:29.690 10:34:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71177 ']' 00:11:29.690 10:34:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.690 10:34:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:29.690 10:34:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.690 10:34:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:29.690 10:34:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.948 [2024-11-20 10:34:33.250854] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:11:29.948 [2024-11-20 10:34:33.251081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71177 ] 00:11:30.214 [2024-11-20 10:34:33.431727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.214 [2024-11-20 10:34:33.569388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.485 [2024-11-20 10:34:33.798209] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:30.485 [2024-11-20 10:34:33.798275] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:30.744 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:30.744 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:30.744 10:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:30.744 10:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:30.744 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.744 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.744 BaseBdev1_malloc 00:11:30.744 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.744 10:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:30.744 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.744 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.004 true 00:11:31.004 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:31.004 10:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:31.004 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.004 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.004 [2024-11-20 10:34:34.234618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:31.004 [2024-11-20 10:34:34.234752] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:31.004 [2024-11-20 10:34:34.234805] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:31.004 [2024-11-20 10:34:34.234848] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:31.004 [2024-11-20 10:34:34.237448] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:31.004 [2024-11-20 10:34:34.237536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:31.004 BaseBdev1 00:11:31.004 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.004 10:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:31.004 10:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:31.004 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.004 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.004 BaseBdev2_malloc 00:11:31.004 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.004 10:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:31.004 10:34:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.004 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.004 true 00:11:31.004 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.004 10:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:31.004 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.004 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.004 [2024-11-20 10:34:34.309813] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:31.004 [2024-11-20 10:34:34.309879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:31.004 [2024-11-20 10:34:34.309900] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:31.004 [2024-11-20 10:34:34.309911] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:31.004 [2024-11-20 10:34:34.312390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:31.004 [2024-11-20 10:34:34.312435] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:31.004 BaseBdev2 00:11:31.004 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.004 10:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:31.004 10:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:31.004 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.004 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.004 BaseBdev3_malloc 00:11:31.004 10:34:34 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.004 10:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:31.004 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.004 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.004 true 00:11:31.004 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.004 10:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:31.004 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.004 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.004 [2024-11-20 10:34:34.392873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:31.004 [2024-11-20 10:34:34.392939] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:31.004 [2024-11-20 10:34:34.392960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:31.004 [2024-11-20 10:34:34.392973] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:31.004 [2024-11-20 10:34:34.395426] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:31.004 [2024-11-20 10:34:34.395542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:31.004 BaseBdev3 00:11:31.004 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.004 10:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:31.004 10:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:31.004 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.004 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.004 BaseBdev4_malloc 00:11:31.004 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.004 10:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:31.004 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.004 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.004 true 00:11:31.005 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.005 10:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:31.005 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.005 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.005 [2024-11-20 10:34:34.458673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:31.005 [2024-11-20 10:34:34.458735] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:31.005 [2024-11-20 10:34:34.458758] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:31.005 [2024-11-20 10:34:34.458771] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:31.005 [2024-11-20 10:34:34.461284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:31.005 [2024-11-20 10:34:34.461338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:31.005 BaseBdev4 00:11:31.005 10:34:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.005 10:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:31.005 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.005 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.005 [2024-11-20 10:34:34.470729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:31.005 [2024-11-20 10:34:34.472890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:31.005 [2024-11-20 10:34:34.472983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:31.005 [2024-11-20 10:34:34.473062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:31.005 [2024-11-20 10:34:34.473325] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:31.005 [2024-11-20 10:34:34.473343] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:31.005 [2024-11-20 10:34:34.473659] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:31.005 [2024-11-20 10:34:34.473846] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:31.005 [2024-11-20 10:34:34.473859] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:31.005 [2024-11-20 10:34:34.474075] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:31.005 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.005 10:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:31.005 10:34:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:31.005 10:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:31.005 10:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:31.005 10:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.005 10:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.005 10:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.005 10:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.005 10:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.005 10:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.264 10:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.264 10:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.264 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.264 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.264 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.264 10:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.264 "name": "raid_bdev1", 00:11:31.264 "uuid": "04bdcca7-2f06-4309-ab1f-0d874b83d63a", 00:11:31.264 "strip_size_kb": 64, 00:11:31.264 "state": "online", 00:11:31.264 "raid_level": "raid0", 00:11:31.264 "superblock": true, 00:11:31.264 "num_base_bdevs": 4, 00:11:31.264 "num_base_bdevs_discovered": 4, 00:11:31.264 "num_base_bdevs_operational": 4, 00:11:31.264 "base_bdevs_list": [ 00:11:31.264 
{ 00:11:31.264 "name": "BaseBdev1", 00:11:31.264 "uuid": "b01537ff-2fbc-52d7-a5c0-b2a852727c19", 00:11:31.264 "is_configured": true, 00:11:31.264 "data_offset": 2048, 00:11:31.264 "data_size": 63488 00:11:31.264 }, 00:11:31.264 { 00:11:31.264 "name": "BaseBdev2", 00:11:31.264 "uuid": "24906cae-dce3-5301-956f-ab3b17a16fcf", 00:11:31.264 "is_configured": true, 00:11:31.264 "data_offset": 2048, 00:11:31.264 "data_size": 63488 00:11:31.264 }, 00:11:31.264 { 00:11:31.264 "name": "BaseBdev3", 00:11:31.264 "uuid": "d60adb2d-08e0-5220-bb16-ccda49b37be9", 00:11:31.264 "is_configured": true, 00:11:31.264 "data_offset": 2048, 00:11:31.264 "data_size": 63488 00:11:31.264 }, 00:11:31.264 { 00:11:31.264 "name": "BaseBdev4", 00:11:31.264 "uuid": "16792936-5a91-5d0f-84c3-17318926e1df", 00:11:31.264 "is_configured": true, 00:11:31.264 "data_offset": 2048, 00:11:31.264 "data_size": 63488 00:11:31.264 } 00:11:31.264 ] 00:11:31.264 }' 00:11:31.264 10:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.264 10:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.524 10:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:31.524 10:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:31.783 [2024-11-20 10:34:35.067469] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:32.722 10:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:32.722 10:34:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.722 10:34:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.722 10:34:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.722 10:34:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:32.722 10:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:32.722 10:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:32.722 10:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:32.722 10:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:32.722 10:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:32.722 10:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:32.722 10:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:32.722 10:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.722 10:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.722 10:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.722 10:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.722 10:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.722 10:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.722 10:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.722 10:34:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.722 10:34:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.722 10:34:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.722 10:34:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.722 "name": "raid_bdev1", 00:11:32.722 "uuid": "04bdcca7-2f06-4309-ab1f-0d874b83d63a", 00:11:32.722 "strip_size_kb": 64, 00:11:32.722 "state": "online", 00:11:32.722 "raid_level": "raid0", 00:11:32.722 "superblock": true, 00:11:32.722 "num_base_bdevs": 4, 00:11:32.722 "num_base_bdevs_discovered": 4, 00:11:32.722 "num_base_bdevs_operational": 4, 00:11:32.722 "base_bdevs_list": [ 00:11:32.722 { 00:11:32.722 "name": "BaseBdev1", 00:11:32.722 "uuid": "b01537ff-2fbc-52d7-a5c0-b2a852727c19", 00:11:32.722 "is_configured": true, 00:11:32.722 "data_offset": 2048, 00:11:32.722 "data_size": 63488 00:11:32.722 }, 00:11:32.722 { 00:11:32.722 "name": "BaseBdev2", 00:11:32.722 "uuid": "24906cae-dce3-5301-956f-ab3b17a16fcf", 00:11:32.722 "is_configured": true, 00:11:32.722 "data_offset": 2048, 00:11:32.722 "data_size": 63488 00:11:32.722 }, 00:11:32.722 { 00:11:32.722 "name": "BaseBdev3", 00:11:32.722 "uuid": "d60adb2d-08e0-5220-bb16-ccda49b37be9", 00:11:32.722 "is_configured": true, 00:11:32.722 "data_offset": 2048, 00:11:32.722 "data_size": 63488 00:11:32.722 }, 00:11:32.722 { 00:11:32.722 "name": "BaseBdev4", 00:11:32.722 "uuid": "16792936-5a91-5d0f-84c3-17318926e1df", 00:11:32.722 "is_configured": true, 00:11:32.722 "data_offset": 2048, 00:11:32.722 "data_size": 63488 00:11:32.722 } 00:11:32.722 ] 00:11:32.722 }' 00:11:32.722 10:34:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.722 10:34:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.982 10:34:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:32.982 10:34:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.982 10:34:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.982 [2024-11-20 10:34:36.428225] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:32.982 [2024-11-20 10:34:36.428325] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:32.982 [2024-11-20 10:34:36.431275] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:32.982 [2024-11-20 10:34:36.431393] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:32.982 [2024-11-20 10:34:36.431466] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:32.982 [2024-11-20 10:34:36.431515] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:32.982 { 00:11:32.982 "results": [ 00:11:32.982 { 00:11:32.982 "job": "raid_bdev1", 00:11:32.982 "core_mask": "0x1", 00:11:32.982 "workload": "randrw", 00:11:32.982 "percentage": 50, 00:11:32.982 "status": "finished", 00:11:32.982 "queue_depth": 1, 00:11:32.982 "io_size": 131072, 00:11:32.982 "runtime": 1.361273, 00:11:32.982 "iops": 13570.385954911322, 00:11:32.982 "mibps": 1696.2982443639153, 00:11:32.982 "io_failed": 1, 00:11:32.982 "io_timeout": 0, 00:11:32.982 "avg_latency_us": 102.30233317401583, 00:11:32.982 "min_latency_us": 29.065502183406114, 00:11:32.982 "max_latency_us": 1745.7187772925763 00:11:32.982 } 00:11:32.982 ], 00:11:32.982 "core_count": 1 00:11:32.982 } 00:11:32.982 10:34:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.982 10:34:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71177 00:11:32.982 10:34:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71177 ']' 00:11:32.982 10:34:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71177 00:11:32.983 10:34:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:32.983 10:34:36 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:32.983 10:34:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71177 00:11:33.241 10:34:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:33.241 10:34:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:33.241 killing process with pid 71177 00:11:33.241 10:34:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71177' 00:11:33.241 10:34:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71177 00:11:33.241 [2024-11-20 10:34:36.478791] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:33.241 10:34:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71177 00:11:33.500 [2024-11-20 10:34:36.837189] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:34.936 10:34:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.jd4KatAn4c 00:11:34.936 10:34:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:34.936 10:34:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:34.936 10:34:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:34.936 10:34:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:34.936 10:34:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:34.936 10:34:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:34.936 10:34:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:11:34.936 00:11:34.936 real 0m4.950s 00:11:34.936 user 0m5.893s 00:11:34.936 sys 0m0.637s 00:11:34.936 ************************************ 00:11:34.936 
END TEST raid_read_error_test 00:11:34.936 ************************************ 00:11:34.936 10:34:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.936 10:34:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.936 10:34:38 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:11:34.936 10:34:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:34.936 10:34:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.936 10:34:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:34.936 ************************************ 00:11:34.936 START TEST raid_write_error_test 00:11:34.937 ************************************ 00:11:34.937 10:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:11:34.937 10:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:34.937 10:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:34.937 10:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:34.937 10:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:34.937 10:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:34.937 10:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:34.937 10:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:34.937 10:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:34.937 10:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:34.937 10:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:34.937 10:34:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:34.937 10:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:34.937 10:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:34.937 10:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:34.937 10:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:34.937 10:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:34.937 10:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:34.937 10:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:34.937 10:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:34.937 10:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:34.937 10:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:34.937 10:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:34.937 10:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:34.937 10:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:34.937 10:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:34.937 10:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:34.937 10:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:34.937 10:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:34.937 10:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Loi5dGMvCM 
00:11:34.937 10:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71324 00:11:34.937 10:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71324 00:11:34.937 10:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:34.937 10:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71324 ']' 00:11:34.937 10:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.937 10:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:34.937 10:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.937 10:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:34.937 10:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.937 [2024-11-20 10:34:38.271575] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:11:34.937 [2024-11-20 10:34:38.271804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71324 ] 00:11:35.195 [2024-11-20 10:34:38.448314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.195 [2024-11-20 10:34:38.571921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.455 [2024-11-20 10:34:38.787358] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.455 [2024-11-20 10:34:38.787486] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:36.021 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:36.021 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:36.021 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:36.021 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:36.021 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.021 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.021 BaseBdev1_malloc 00:11:36.021 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.021 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:36.021 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.021 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.021 true 00:11:36.021 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:36.021 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:36.021 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.021 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.021 [2024-11-20 10:34:39.265424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:36.021 [2024-11-20 10:34:39.265483] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.021 [2024-11-20 10:34:39.265503] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:36.021 [2024-11-20 10:34:39.265514] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.021 [2024-11-20 10:34:39.267738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.021 [2024-11-20 10:34:39.267798] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:36.021 BaseBdev1 00:11:36.021 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.021 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:36.021 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:36.021 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.021 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.021 BaseBdev2_malloc 00:11:36.021 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.021 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:36.021 10:34:39 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.021 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.021 true 00:11:36.021 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.021 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:36.021 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.021 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.021 [2024-11-20 10:34:39.336421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:36.021 [2024-11-20 10:34:39.336482] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.021 [2024-11-20 10:34:39.336502] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:36.021 [2024-11-20 10:34:39.336513] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.021 [2024-11-20 10:34:39.338822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.021 [2024-11-20 10:34:39.338866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:36.021 BaseBdev2 00:11:36.021 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.021 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:36.021 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:36.021 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.021 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:36.021 BaseBdev3_malloc 00:11:36.021 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.021 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:36.021 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.022 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.022 true 00:11:36.022 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.022 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:36.022 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.022 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.022 [2024-11-20 10:34:39.407847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:36.022 [2024-11-20 10:34:39.407908] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.022 [2024-11-20 10:34:39.407928] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:36.022 [2024-11-20 10:34:39.407940] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.022 [2024-11-20 10:34:39.410253] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.022 [2024-11-20 10:34:39.410293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:36.022 BaseBdev3 00:11:36.022 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.022 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:36.022 10:34:39 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:36.022 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.022 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.022 BaseBdev4_malloc 00:11:36.022 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.022 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:36.022 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.022 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.022 true 00:11:36.022 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.022 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:36.022 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.022 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.022 [2024-11-20 10:34:39.470547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:36.022 [2024-11-20 10:34:39.470610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.022 [2024-11-20 10:34:39.470632] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:36.022 [2024-11-20 10:34:39.470644] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.022 [2024-11-20 10:34:39.472977] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.022 [2024-11-20 10:34:39.473022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:36.022 BaseBdev4 
00:11:36.022 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.022 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:36.022 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.022 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.022 [2024-11-20 10:34:39.478593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:36.022 [2024-11-20 10:34:39.480627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:36.022 [2024-11-20 10:34:39.480762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:36.022 [2024-11-20 10:34:39.480875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:36.022 [2024-11-20 10:34:39.481186] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:36.022 [2024-11-20 10:34:39.481248] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:36.022 [2024-11-20 10:34:39.481579] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:36.022 [2024-11-20 10:34:39.481809] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:36.022 [2024-11-20 10:34:39.481869] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:36.022 [2024-11-20 10:34:39.482106] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.022 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.022 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:11:36.022 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.022 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.022 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:36.022 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.022 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.022 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.022 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.022 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.022 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.022 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.022 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.022 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.022 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.281 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.281 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.281 "name": "raid_bdev1", 00:11:36.281 "uuid": "a8826db2-9c90-462e-a1a8-38965fe57a01", 00:11:36.281 "strip_size_kb": 64, 00:11:36.281 "state": "online", 00:11:36.281 "raid_level": "raid0", 00:11:36.281 "superblock": true, 00:11:36.281 "num_base_bdevs": 4, 00:11:36.281 "num_base_bdevs_discovered": 4, 00:11:36.281 
"num_base_bdevs_operational": 4, 00:11:36.281 "base_bdevs_list": [ 00:11:36.281 { 00:11:36.281 "name": "BaseBdev1", 00:11:36.281 "uuid": "32f34ede-dccd-5add-9601-3d3d92419cac", 00:11:36.281 "is_configured": true, 00:11:36.281 "data_offset": 2048, 00:11:36.281 "data_size": 63488 00:11:36.281 }, 00:11:36.281 { 00:11:36.281 "name": "BaseBdev2", 00:11:36.281 "uuid": "cd4e91b1-d81c-5d50-94ca-af4366ee8674", 00:11:36.281 "is_configured": true, 00:11:36.281 "data_offset": 2048, 00:11:36.281 "data_size": 63488 00:11:36.281 }, 00:11:36.281 { 00:11:36.281 "name": "BaseBdev3", 00:11:36.281 "uuid": "4027c320-51d7-5dd2-b461-8c850cdead5e", 00:11:36.281 "is_configured": true, 00:11:36.281 "data_offset": 2048, 00:11:36.281 "data_size": 63488 00:11:36.281 }, 00:11:36.281 { 00:11:36.281 "name": "BaseBdev4", 00:11:36.281 "uuid": "b0f46e5e-f840-5710-9dd8-e8ac7aacbc4c", 00:11:36.281 "is_configured": true, 00:11:36.281 "data_offset": 2048, 00:11:36.281 "data_size": 63488 00:11:36.281 } 00:11:36.281 ] 00:11:36.281 }' 00:11:36.281 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.281 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.541 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:36.541 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:36.541 [2024-11-20 10:34:39.983305] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:37.479 10:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:37.479 10:34:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.479 10:34:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.479 10:34:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.479 10:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:37.479 10:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:37.479 10:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:37.479 10:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:37.479 10:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:37.479 10:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:37.479 10:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:37.479 10:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.479 10:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.479 10:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.479 10:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.479 10:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.479 10:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.479 10:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.479 10:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.479 10:34:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.479 10:34:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.479 10:34:40 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.479 10:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.479 "name": "raid_bdev1", 00:11:37.479 "uuid": "a8826db2-9c90-462e-a1a8-38965fe57a01", 00:11:37.479 "strip_size_kb": 64, 00:11:37.479 "state": "online", 00:11:37.479 "raid_level": "raid0", 00:11:37.479 "superblock": true, 00:11:37.479 "num_base_bdevs": 4, 00:11:37.479 "num_base_bdevs_discovered": 4, 00:11:37.479 "num_base_bdevs_operational": 4, 00:11:37.479 "base_bdevs_list": [ 00:11:37.479 { 00:11:37.479 "name": "BaseBdev1", 00:11:37.479 "uuid": "32f34ede-dccd-5add-9601-3d3d92419cac", 00:11:37.479 "is_configured": true, 00:11:37.479 "data_offset": 2048, 00:11:37.479 "data_size": 63488 00:11:37.479 }, 00:11:37.479 { 00:11:37.479 "name": "BaseBdev2", 00:11:37.479 "uuid": "cd4e91b1-d81c-5d50-94ca-af4366ee8674", 00:11:37.479 "is_configured": true, 00:11:37.479 "data_offset": 2048, 00:11:37.479 "data_size": 63488 00:11:37.479 }, 00:11:37.479 { 00:11:37.479 "name": "BaseBdev3", 00:11:37.479 "uuid": "4027c320-51d7-5dd2-b461-8c850cdead5e", 00:11:37.479 "is_configured": true, 00:11:37.479 "data_offset": 2048, 00:11:37.479 "data_size": 63488 00:11:37.479 }, 00:11:37.479 { 00:11:37.479 "name": "BaseBdev4", 00:11:37.479 "uuid": "b0f46e5e-f840-5710-9dd8-e8ac7aacbc4c", 00:11:37.479 "is_configured": true, 00:11:37.479 "data_offset": 2048, 00:11:37.479 "data_size": 63488 00:11:37.479 } 00:11:37.479 ] 00:11:37.479 }' 00:11:37.739 10:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.739 10:34:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.998 10:34:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:37.998 10:34:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.998 10:34:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:37.998 [2024-11-20 10:34:41.362570] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:37.998 [2024-11-20 10:34:41.362659] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:37.998 [2024-11-20 10:34:41.365590] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:37.998 [2024-11-20 10:34:41.365703] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:37.998 [2024-11-20 10:34:41.365774] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:37.998 [2024-11-20 10:34:41.365826] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:37.998 10:34:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.998 10:34:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71324 00:11:37.998 10:34:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71324 ']' 00:11:37.998 10:34:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71324 00:11:37.998 { 00:11:37.998 "results": [ 00:11:37.998 { 00:11:37.998 "job": "raid_bdev1", 00:11:37.998 "core_mask": "0x1", 00:11:37.998 "workload": "randrw", 00:11:37.998 "percentage": 50, 00:11:37.998 "status": "finished", 00:11:37.998 "queue_depth": 1, 00:11:37.998 "io_size": 131072, 00:11:37.998 "runtime": 1.379707, 00:11:37.998 "iops": 14553.089895173396, 00:11:37.998 "mibps": 1819.1362368966745, 00:11:37.998 "io_failed": 1, 00:11:37.998 "io_timeout": 0, 00:11:37.998 "avg_latency_us": 95.47456845108648, 00:11:37.998 "min_latency_us": 27.388646288209607, 00:11:37.998 "max_latency_us": 1459.5353711790392 00:11:37.998 } 00:11:37.998 ], 00:11:37.998 "core_count": 1 00:11:37.998 } 00:11:37.998 10:34:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # 
uname 00:11:37.998 10:34:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:37.998 10:34:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71324 00:11:37.998 killing process with pid 71324 00:11:37.998 10:34:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:37.999 10:34:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:37.999 10:34:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71324' 00:11:37.999 10:34:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71324 00:11:37.999 [2024-11-20 10:34:41.408807] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:37.999 10:34:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71324 00:11:38.567 [2024-11-20 10:34:41.742040] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:39.505 10:34:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Loi5dGMvCM 00:11:39.505 10:34:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:39.505 10:34:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:39.505 10:34:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:39.505 ************************************ 00:11:39.505 END TEST raid_write_error_test 00:11:39.505 ************************************ 00:11:39.505 10:34:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:39.505 10:34:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:39.505 10:34:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:39.505 10:34:42 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:39.505 00:11:39.505 real 0m4.787s 00:11:39.505 user 0m5.686s 00:11:39.505 sys 0m0.599s 00:11:39.505 10:34:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.505 10:34:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.766 10:34:43 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:39.766 10:34:43 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:11:39.766 10:34:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:39.766 10:34:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:39.766 10:34:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:39.766 ************************************ 00:11:39.766 START TEST raid_state_function_test 00:11:39.766 ************************************ 00:11:39.766 10:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:11:39.766 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:39.766 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:39.766 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:39.766 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:39.766 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:39.766 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:39.766 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:39.766 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:39.766 10:34:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:39.766 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:39.766 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:39.766 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:39.766 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:39.766 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:39.766 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:39.766 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:39.766 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:39.766 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:39.766 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:39.766 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:39.766 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:39.766 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:39.766 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:39.766 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:39.766 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:39.766 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:39.766 10:34:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:39.766 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:39.766 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:39.766 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71465 00:11:39.766 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:39.766 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71465' 00:11:39.766 Process raid pid: 71465 00:11:39.766 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71465 00:11:39.766 10:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71465 ']' 00:11:39.766 10:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.766 10:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:39.766 10:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.766 10:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:39.766 10:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.766 [2024-11-20 10:34:43.122558] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:11:39.766 [2024-11-20 10:34:43.122747] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:40.026 [2024-11-20 10:34:43.302016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.026 [2024-11-20 10:34:43.423062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.286 [2024-11-20 10:34:43.633025] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:40.286 [2024-11-20 10:34:43.633071] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:40.545 10:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:40.545 10:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:40.545 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:40.545 10:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.545 10:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.545 [2024-11-20 10:34:43.962324] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:40.545 [2024-11-20 10:34:43.962417] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:40.545 [2024-11-20 10:34:43.962431] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:40.545 [2024-11-20 10:34:43.962442] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:40.545 [2024-11-20 10:34:43.962455] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:40.545 [2024-11-20 10:34:43.962466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:40.545 [2024-11-20 10:34:43.962473] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:40.545 [2024-11-20 10:34:43.962483] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:40.545 10:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.545 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:40.545 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.545 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.545 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:40.545 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.546 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.546 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.546 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.546 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.546 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.546 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.546 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.546 10:34:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.546 10:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.546 10:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.546 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.546 "name": "Existed_Raid", 00:11:40.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.546 "strip_size_kb": 64, 00:11:40.546 "state": "configuring", 00:11:40.546 "raid_level": "concat", 00:11:40.546 "superblock": false, 00:11:40.546 "num_base_bdevs": 4, 00:11:40.546 "num_base_bdevs_discovered": 0, 00:11:40.546 "num_base_bdevs_operational": 4, 00:11:40.546 "base_bdevs_list": [ 00:11:40.546 { 00:11:40.546 "name": "BaseBdev1", 00:11:40.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.546 "is_configured": false, 00:11:40.546 "data_offset": 0, 00:11:40.546 "data_size": 0 00:11:40.546 }, 00:11:40.546 { 00:11:40.546 "name": "BaseBdev2", 00:11:40.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.546 "is_configured": false, 00:11:40.546 "data_offset": 0, 00:11:40.546 "data_size": 0 00:11:40.546 }, 00:11:40.546 { 00:11:40.546 "name": "BaseBdev3", 00:11:40.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.546 "is_configured": false, 00:11:40.546 "data_offset": 0, 00:11:40.546 "data_size": 0 00:11:40.546 }, 00:11:40.546 { 00:11:40.546 "name": "BaseBdev4", 00:11:40.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.546 "is_configured": false, 00:11:40.546 "data_offset": 0, 00:11:40.546 "data_size": 0 00:11:40.546 } 00:11:40.546 ] 00:11:40.546 }' 00:11:40.546 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.546 10:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.115 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:41.115 10:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.115 10:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.115 [2024-11-20 10:34:44.409537] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:41.115 [2024-11-20 10:34:44.409650] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:41.115 10:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.115 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:41.115 10:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.115 10:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.115 [2024-11-20 10:34:44.421549] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:41.115 [2024-11-20 10:34:44.421648] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:41.115 [2024-11-20 10:34:44.421683] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:41.115 [2024-11-20 10:34:44.421711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:41.115 [2024-11-20 10:34:44.421732] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:41.115 [2024-11-20 10:34:44.421773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:41.115 [2024-11-20 10:34:44.421802] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:41.115 [2024-11-20 10:34:44.421830] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:41.115 10:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.115 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:41.115 10:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.115 10:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.115 [2024-11-20 10:34:44.471321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:41.115 BaseBdev1 00:11:41.115 10:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.115 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:41.115 10:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:41.115 10:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:41.115 10:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:41.115 10:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:41.115 10:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:41.115 10:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:41.115 10:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.115 10:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.115 10:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.115 10:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:41.115 10:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.115 10:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.115 [ 00:11:41.115 { 00:11:41.115 "name": "BaseBdev1", 00:11:41.115 "aliases": [ 00:11:41.115 "0291bdbb-b6a2-4be6-ae54-41c7d6b5e73e" 00:11:41.115 ], 00:11:41.115 "product_name": "Malloc disk", 00:11:41.115 "block_size": 512, 00:11:41.115 "num_blocks": 65536, 00:11:41.115 "uuid": "0291bdbb-b6a2-4be6-ae54-41c7d6b5e73e", 00:11:41.115 "assigned_rate_limits": { 00:11:41.115 "rw_ios_per_sec": 0, 00:11:41.115 "rw_mbytes_per_sec": 0, 00:11:41.115 "r_mbytes_per_sec": 0, 00:11:41.115 "w_mbytes_per_sec": 0 00:11:41.115 }, 00:11:41.115 "claimed": true, 00:11:41.115 "claim_type": "exclusive_write", 00:11:41.115 "zoned": false, 00:11:41.115 "supported_io_types": { 00:11:41.115 "read": true, 00:11:41.115 "write": true, 00:11:41.115 "unmap": true, 00:11:41.115 "flush": true, 00:11:41.115 "reset": true, 00:11:41.115 "nvme_admin": false, 00:11:41.115 "nvme_io": false, 00:11:41.115 "nvme_io_md": false, 00:11:41.115 "write_zeroes": true, 00:11:41.115 "zcopy": true, 00:11:41.115 "get_zone_info": false, 00:11:41.115 "zone_management": false, 00:11:41.115 "zone_append": false, 00:11:41.115 "compare": false, 00:11:41.115 "compare_and_write": false, 00:11:41.115 "abort": true, 00:11:41.115 "seek_hole": false, 00:11:41.115 "seek_data": false, 00:11:41.115 "copy": true, 00:11:41.115 "nvme_iov_md": false 00:11:41.115 }, 00:11:41.115 "memory_domains": [ 00:11:41.115 { 00:11:41.115 "dma_device_id": "system", 00:11:41.115 "dma_device_type": 1 00:11:41.115 }, 00:11:41.115 { 00:11:41.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.115 "dma_device_type": 2 00:11:41.115 } 00:11:41.115 ], 00:11:41.115 "driver_specific": {} 00:11:41.115 } 00:11:41.115 ] 00:11:41.115 10:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:41.115 10:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:41.115 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:41.115 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.115 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.116 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:41.116 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.116 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.116 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.116 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.116 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.116 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.116 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.116 10:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.116 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.116 10:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.116 10:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.116 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.116 "name": "Existed_Raid", 
00:11:41.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.116 "strip_size_kb": 64, 00:11:41.116 "state": "configuring", 00:11:41.116 "raid_level": "concat", 00:11:41.116 "superblock": false, 00:11:41.116 "num_base_bdevs": 4, 00:11:41.116 "num_base_bdevs_discovered": 1, 00:11:41.116 "num_base_bdevs_operational": 4, 00:11:41.116 "base_bdevs_list": [ 00:11:41.116 { 00:11:41.116 "name": "BaseBdev1", 00:11:41.116 "uuid": "0291bdbb-b6a2-4be6-ae54-41c7d6b5e73e", 00:11:41.116 "is_configured": true, 00:11:41.116 "data_offset": 0, 00:11:41.116 "data_size": 65536 00:11:41.116 }, 00:11:41.116 { 00:11:41.116 "name": "BaseBdev2", 00:11:41.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.116 "is_configured": false, 00:11:41.116 "data_offset": 0, 00:11:41.116 "data_size": 0 00:11:41.116 }, 00:11:41.116 { 00:11:41.116 "name": "BaseBdev3", 00:11:41.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.116 "is_configured": false, 00:11:41.116 "data_offset": 0, 00:11:41.116 "data_size": 0 00:11:41.116 }, 00:11:41.116 { 00:11:41.116 "name": "BaseBdev4", 00:11:41.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.116 "is_configured": false, 00:11:41.116 "data_offset": 0, 00:11:41.116 "data_size": 0 00:11:41.116 } 00:11:41.116 ] 00:11:41.116 }' 00:11:41.116 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.116 10:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.684 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:41.684 10:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.684 10:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.684 [2024-11-20 10:34:44.922647] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:41.684 [2024-11-20 10:34:44.922713] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:41.684 10:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.684 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:41.684 10:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.684 10:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.684 [2024-11-20 10:34:44.934671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:41.684 [2024-11-20 10:34:44.936779] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:41.684 [2024-11-20 10:34:44.936830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:41.684 [2024-11-20 10:34:44.936842] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:41.684 [2024-11-20 10:34:44.936855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:41.684 [2024-11-20 10:34:44.936863] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:41.685 [2024-11-20 10:34:44.936873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:41.685 10:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.685 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:41.685 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:41.685 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:41.685 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.685 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.685 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:41.685 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.685 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.685 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.685 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.685 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.685 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.685 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.685 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.685 10:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.685 10:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.685 10:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.685 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.685 "name": "Existed_Raid", 00:11:41.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.685 "strip_size_kb": 64, 00:11:41.685 "state": "configuring", 00:11:41.685 "raid_level": "concat", 00:11:41.685 "superblock": false, 00:11:41.685 "num_base_bdevs": 4, 00:11:41.685 
"num_base_bdevs_discovered": 1, 00:11:41.685 "num_base_bdevs_operational": 4, 00:11:41.685 "base_bdevs_list": [ 00:11:41.685 { 00:11:41.685 "name": "BaseBdev1", 00:11:41.685 "uuid": "0291bdbb-b6a2-4be6-ae54-41c7d6b5e73e", 00:11:41.685 "is_configured": true, 00:11:41.685 "data_offset": 0, 00:11:41.685 "data_size": 65536 00:11:41.685 }, 00:11:41.685 { 00:11:41.685 "name": "BaseBdev2", 00:11:41.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.685 "is_configured": false, 00:11:41.685 "data_offset": 0, 00:11:41.685 "data_size": 0 00:11:41.685 }, 00:11:41.685 { 00:11:41.685 "name": "BaseBdev3", 00:11:41.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.685 "is_configured": false, 00:11:41.685 "data_offset": 0, 00:11:41.685 "data_size": 0 00:11:41.685 }, 00:11:41.685 { 00:11:41.685 "name": "BaseBdev4", 00:11:41.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.685 "is_configured": false, 00:11:41.685 "data_offset": 0, 00:11:41.685 "data_size": 0 00:11:41.685 } 00:11:41.685 ] 00:11:41.685 }' 00:11:41.685 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.685 10:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.944 10:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:41.944 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.944 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.202 [2024-11-20 10:34:45.427720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:42.202 BaseBdev2 00:11:42.202 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.202 10:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:42.202 10:34:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:42.202 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:42.202 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:42.202 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:42.202 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:42.203 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:42.203 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.203 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.203 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.203 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:42.203 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.203 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.203 [ 00:11:42.203 { 00:11:42.203 "name": "BaseBdev2", 00:11:42.203 "aliases": [ 00:11:42.203 "73c2d69b-c6f7-4cd2-bb13-8a02f6df3dcc" 00:11:42.203 ], 00:11:42.203 "product_name": "Malloc disk", 00:11:42.203 "block_size": 512, 00:11:42.203 "num_blocks": 65536, 00:11:42.203 "uuid": "73c2d69b-c6f7-4cd2-bb13-8a02f6df3dcc", 00:11:42.203 "assigned_rate_limits": { 00:11:42.203 "rw_ios_per_sec": 0, 00:11:42.203 "rw_mbytes_per_sec": 0, 00:11:42.203 "r_mbytes_per_sec": 0, 00:11:42.203 "w_mbytes_per_sec": 0 00:11:42.203 }, 00:11:42.203 "claimed": true, 00:11:42.203 "claim_type": "exclusive_write", 00:11:42.203 "zoned": false, 00:11:42.203 "supported_io_types": { 
00:11:42.203 "read": true, 00:11:42.203 "write": true, 00:11:42.203 "unmap": true, 00:11:42.203 "flush": true, 00:11:42.203 "reset": true, 00:11:42.203 "nvme_admin": false, 00:11:42.203 "nvme_io": false, 00:11:42.203 "nvme_io_md": false, 00:11:42.203 "write_zeroes": true, 00:11:42.203 "zcopy": true, 00:11:42.203 "get_zone_info": false, 00:11:42.203 "zone_management": false, 00:11:42.203 "zone_append": false, 00:11:42.203 "compare": false, 00:11:42.203 "compare_and_write": false, 00:11:42.203 "abort": true, 00:11:42.203 "seek_hole": false, 00:11:42.203 "seek_data": false, 00:11:42.203 "copy": true, 00:11:42.203 "nvme_iov_md": false 00:11:42.203 }, 00:11:42.203 "memory_domains": [ 00:11:42.203 { 00:11:42.203 "dma_device_id": "system", 00:11:42.203 "dma_device_type": 1 00:11:42.203 }, 00:11:42.203 { 00:11:42.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.203 "dma_device_type": 2 00:11:42.203 } 00:11:42.203 ], 00:11:42.203 "driver_specific": {} 00:11:42.203 } 00:11:42.203 ] 00:11:42.203 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.203 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:42.203 10:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:42.203 10:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:42.203 10:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:42.203 10:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.203 10:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.203 10:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:42.203 10:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:42.203 10:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.203 10:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.203 10:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.203 10:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.203 10:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.203 10:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.203 10:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.203 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.203 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.203 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.203 10:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.203 "name": "Existed_Raid", 00:11:42.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.203 "strip_size_kb": 64, 00:11:42.203 "state": "configuring", 00:11:42.203 "raid_level": "concat", 00:11:42.203 "superblock": false, 00:11:42.203 "num_base_bdevs": 4, 00:11:42.203 "num_base_bdevs_discovered": 2, 00:11:42.203 "num_base_bdevs_operational": 4, 00:11:42.203 "base_bdevs_list": [ 00:11:42.203 { 00:11:42.203 "name": "BaseBdev1", 00:11:42.203 "uuid": "0291bdbb-b6a2-4be6-ae54-41c7d6b5e73e", 00:11:42.203 "is_configured": true, 00:11:42.203 "data_offset": 0, 00:11:42.203 "data_size": 65536 00:11:42.203 }, 00:11:42.203 { 00:11:42.203 "name": "BaseBdev2", 00:11:42.203 "uuid": "73c2d69b-c6f7-4cd2-bb13-8a02f6df3dcc", 00:11:42.203 
"is_configured": true, 00:11:42.203 "data_offset": 0, 00:11:42.203 "data_size": 65536 00:11:42.203 }, 00:11:42.203 { 00:11:42.203 "name": "BaseBdev3", 00:11:42.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.203 "is_configured": false, 00:11:42.203 "data_offset": 0, 00:11:42.203 "data_size": 0 00:11:42.203 }, 00:11:42.203 { 00:11:42.203 "name": "BaseBdev4", 00:11:42.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.203 "is_configured": false, 00:11:42.203 "data_offset": 0, 00:11:42.203 "data_size": 0 00:11:42.203 } 00:11:42.203 ] 00:11:42.203 }' 00:11:42.203 10:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.203 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.462 10:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:42.462 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.462 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.721 [2024-11-20 10:34:45.978445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:42.721 BaseBdev3 00:11:42.721 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.721 10:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:42.721 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:42.721 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:42.721 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:42.721 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:42.721 10:34:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:42.721 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:42.721 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.721 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.721 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.721 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:42.721 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.721 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.721 [ 00:11:42.721 { 00:11:42.721 "name": "BaseBdev3", 00:11:42.721 "aliases": [ 00:11:42.721 "33659581-5929-45a6-8a21-589f116ce3cb" 00:11:42.721 ], 00:11:42.721 "product_name": "Malloc disk", 00:11:42.721 "block_size": 512, 00:11:42.721 "num_blocks": 65536, 00:11:42.721 "uuid": "33659581-5929-45a6-8a21-589f116ce3cb", 00:11:42.721 "assigned_rate_limits": { 00:11:42.721 "rw_ios_per_sec": 0, 00:11:42.721 "rw_mbytes_per_sec": 0, 00:11:42.721 "r_mbytes_per_sec": 0, 00:11:42.721 "w_mbytes_per_sec": 0 00:11:42.721 }, 00:11:42.721 "claimed": true, 00:11:42.721 "claim_type": "exclusive_write", 00:11:42.721 "zoned": false, 00:11:42.721 "supported_io_types": { 00:11:42.721 "read": true, 00:11:42.721 "write": true, 00:11:42.721 "unmap": true, 00:11:42.721 "flush": true, 00:11:42.721 "reset": true, 00:11:42.721 "nvme_admin": false, 00:11:42.721 "nvme_io": false, 00:11:42.721 "nvme_io_md": false, 00:11:42.721 "write_zeroes": true, 00:11:42.721 "zcopy": true, 00:11:42.721 "get_zone_info": false, 00:11:42.721 "zone_management": false, 00:11:42.721 "zone_append": false, 00:11:42.721 "compare": false, 00:11:42.721 "compare_and_write": false, 
00:11:42.721 "abort": true, 00:11:42.721 "seek_hole": false, 00:11:42.721 "seek_data": false, 00:11:42.721 "copy": true, 00:11:42.721 "nvme_iov_md": false 00:11:42.721 }, 00:11:42.721 "memory_domains": [ 00:11:42.721 { 00:11:42.721 "dma_device_id": "system", 00:11:42.721 "dma_device_type": 1 00:11:42.721 }, 00:11:42.721 { 00:11:42.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.721 "dma_device_type": 2 00:11:42.721 } 00:11:42.721 ], 00:11:42.721 "driver_specific": {} 00:11:42.721 } 00:11:42.721 ] 00:11:42.721 10:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.721 10:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:42.721 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:42.721 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:42.721 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:42.721 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.721 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.721 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:42.721 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:42.722 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.722 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.722 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.722 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:42.722 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.722 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.722 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.722 10:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.722 10:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.722 10:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.722 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.722 "name": "Existed_Raid", 00:11:42.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.722 "strip_size_kb": 64, 00:11:42.722 "state": "configuring", 00:11:42.722 "raid_level": "concat", 00:11:42.722 "superblock": false, 00:11:42.722 "num_base_bdevs": 4, 00:11:42.722 "num_base_bdevs_discovered": 3, 00:11:42.722 "num_base_bdevs_operational": 4, 00:11:42.722 "base_bdevs_list": [ 00:11:42.722 { 00:11:42.722 "name": "BaseBdev1", 00:11:42.722 "uuid": "0291bdbb-b6a2-4be6-ae54-41c7d6b5e73e", 00:11:42.722 "is_configured": true, 00:11:42.722 "data_offset": 0, 00:11:42.722 "data_size": 65536 00:11:42.722 }, 00:11:42.722 { 00:11:42.722 "name": "BaseBdev2", 00:11:42.722 "uuid": "73c2d69b-c6f7-4cd2-bb13-8a02f6df3dcc", 00:11:42.722 "is_configured": true, 00:11:42.722 "data_offset": 0, 00:11:42.722 "data_size": 65536 00:11:42.722 }, 00:11:42.722 { 00:11:42.722 "name": "BaseBdev3", 00:11:42.722 "uuid": "33659581-5929-45a6-8a21-589f116ce3cb", 00:11:42.722 "is_configured": true, 00:11:42.722 "data_offset": 0, 00:11:42.722 "data_size": 65536 00:11:42.722 }, 00:11:42.722 { 00:11:42.722 "name": "BaseBdev4", 00:11:42.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.722 "is_configured": false, 
00:11:42.722 "data_offset": 0, 00:11:42.722 "data_size": 0 00:11:42.722 } 00:11:42.722 ] 00:11:42.722 }' 00:11:42.722 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.722 10:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.980 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:42.980 10:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.980 10:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.239 [2024-11-20 10:34:46.479555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:43.239 [2024-11-20 10:34:46.479615] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:43.239 [2024-11-20 10:34:46.479629] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:43.239 [2024-11-20 10:34:46.479942] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:43.239 [2024-11-20 10:34:46.480120] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:43.239 [2024-11-20 10:34:46.480137] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:43.239 [2024-11-20 10:34:46.480423] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:43.239 BaseBdev4 00:11:43.239 10:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.239 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:43.239 10:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:43.239 10:34:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:43.239 10:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:43.239 10:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:43.239 10:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:43.239 10:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:43.239 10:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.239 10:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.239 10:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.239 10:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:43.239 10:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.239 10:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.239 [ 00:11:43.239 { 00:11:43.239 "name": "BaseBdev4", 00:11:43.239 "aliases": [ 00:11:43.239 "dfca130c-f3cf-4861-8e7c-496829e6d367" 00:11:43.239 ], 00:11:43.239 "product_name": "Malloc disk", 00:11:43.239 "block_size": 512, 00:11:43.239 "num_blocks": 65536, 00:11:43.239 "uuid": "dfca130c-f3cf-4861-8e7c-496829e6d367", 00:11:43.239 "assigned_rate_limits": { 00:11:43.239 "rw_ios_per_sec": 0, 00:11:43.239 "rw_mbytes_per_sec": 0, 00:11:43.239 "r_mbytes_per_sec": 0, 00:11:43.239 "w_mbytes_per_sec": 0 00:11:43.239 }, 00:11:43.239 "claimed": true, 00:11:43.239 "claim_type": "exclusive_write", 00:11:43.239 "zoned": false, 00:11:43.239 "supported_io_types": { 00:11:43.239 "read": true, 00:11:43.239 "write": true, 00:11:43.239 "unmap": true, 00:11:43.239 "flush": true, 00:11:43.239 "reset": true, 00:11:43.239 
"nvme_admin": false, 00:11:43.239 "nvme_io": false, 00:11:43.239 "nvme_io_md": false, 00:11:43.239 "write_zeroes": true, 00:11:43.239 "zcopy": true, 00:11:43.239 "get_zone_info": false, 00:11:43.239 "zone_management": false, 00:11:43.239 "zone_append": false, 00:11:43.239 "compare": false, 00:11:43.239 "compare_and_write": false, 00:11:43.239 "abort": true, 00:11:43.239 "seek_hole": false, 00:11:43.239 "seek_data": false, 00:11:43.239 "copy": true, 00:11:43.239 "nvme_iov_md": false 00:11:43.239 }, 00:11:43.239 "memory_domains": [ 00:11:43.239 { 00:11:43.239 "dma_device_id": "system", 00:11:43.239 "dma_device_type": 1 00:11:43.239 }, 00:11:43.239 { 00:11:43.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.239 "dma_device_type": 2 00:11:43.239 } 00:11:43.239 ], 00:11:43.239 "driver_specific": {} 00:11:43.239 } 00:11:43.239 ] 00:11:43.239 10:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.239 10:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:43.239 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:43.239 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:43.239 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:43.239 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.239 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:43.239 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:43.239 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:43.239 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.239 
10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.239 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.239 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.239 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.239 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.239 10:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.239 10:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.239 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.239 10:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.239 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.239 "name": "Existed_Raid", 00:11:43.239 "uuid": "84c0573d-4d7b-4e95-b217-ed50d7e40218", 00:11:43.239 "strip_size_kb": 64, 00:11:43.240 "state": "online", 00:11:43.240 "raid_level": "concat", 00:11:43.240 "superblock": false, 00:11:43.240 "num_base_bdevs": 4, 00:11:43.240 "num_base_bdevs_discovered": 4, 00:11:43.240 "num_base_bdevs_operational": 4, 00:11:43.240 "base_bdevs_list": [ 00:11:43.240 { 00:11:43.240 "name": "BaseBdev1", 00:11:43.240 "uuid": "0291bdbb-b6a2-4be6-ae54-41c7d6b5e73e", 00:11:43.240 "is_configured": true, 00:11:43.240 "data_offset": 0, 00:11:43.240 "data_size": 65536 00:11:43.240 }, 00:11:43.240 { 00:11:43.240 "name": "BaseBdev2", 00:11:43.240 "uuid": "73c2d69b-c6f7-4cd2-bb13-8a02f6df3dcc", 00:11:43.240 "is_configured": true, 00:11:43.240 "data_offset": 0, 00:11:43.240 "data_size": 65536 00:11:43.240 }, 00:11:43.240 { 00:11:43.240 "name": "BaseBdev3", 
00:11:43.240 "uuid": "33659581-5929-45a6-8a21-589f116ce3cb", 00:11:43.240 "is_configured": true, 00:11:43.240 "data_offset": 0, 00:11:43.240 "data_size": 65536 00:11:43.240 }, 00:11:43.240 { 00:11:43.240 "name": "BaseBdev4", 00:11:43.240 "uuid": "dfca130c-f3cf-4861-8e7c-496829e6d367", 00:11:43.240 "is_configured": true, 00:11:43.240 "data_offset": 0, 00:11:43.240 "data_size": 65536 00:11:43.240 } 00:11:43.240 ] 00:11:43.240 }' 00:11:43.240 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.240 10:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.807 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:43.807 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:43.807 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:43.807 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:43.807 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:43.807 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:43.807 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:43.807 10:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.807 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:43.807 10:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.807 [2024-11-20 10:34:46.987118] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:43.807 10:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.807 
10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:43.807 "name": "Existed_Raid", 00:11:43.807 "aliases": [ 00:11:43.807 "84c0573d-4d7b-4e95-b217-ed50d7e40218" 00:11:43.807 ], 00:11:43.807 "product_name": "Raid Volume", 00:11:43.807 "block_size": 512, 00:11:43.807 "num_blocks": 262144, 00:11:43.807 "uuid": "84c0573d-4d7b-4e95-b217-ed50d7e40218", 00:11:43.807 "assigned_rate_limits": { 00:11:43.807 "rw_ios_per_sec": 0, 00:11:43.807 "rw_mbytes_per_sec": 0, 00:11:43.807 "r_mbytes_per_sec": 0, 00:11:43.807 "w_mbytes_per_sec": 0 00:11:43.807 }, 00:11:43.807 "claimed": false, 00:11:43.807 "zoned": false, 00:11:43.807 "supported_io_types": { 00:11:43.807 "read": true, 00:11:43.807 "write": true, 00:11:43.807 "unmap": true, 00:11:43.807 "flush": true, 00:11:43.807 "reset": true, 00:11:43.807 "nvme_admin": false, 00:11:43.807 "nvme_io": false, 00:11:43.807 "nvme_io_md": false, 00:11:43.807 "write_zeroes": true, 00:11:43.807 "zcopy": false, 00:11:43.807 "get_zone_info": false, 00:11:43.807 "zone_management": false, 00:11:43.807 "zone_append": false, 00:11:43.807 "compare": false, 00:11:43.807 "compare_and_write": false, 00:11:43.807 "abort": false, 00:11:43.807 "seek_hole": false, 00:11:43.807 "seek_data": false, 00:11:43.807 "copy": false, 00:11:43.807 "nvme_iov_md": false 00:11:43.807 }, 00:11:43.807 "memory_domains": [ 00:11:43.807 { 00:11:43.807 "dma_device_id": "system", 00:11:43.807 "dma_device_type": 1 00:11:43.807 }, 00:11:43.808 { 00:11:43.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.808 "dma_device_type": 2 00:11:43.808 }, 00:11:43.808 { 00:11:43.808 "dma_device_id": "system", 00:11:43.808 "dma_device_type": 1 00:11:43.808 }, 00:11:43.808 { 00:11:43.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.808 "dma_device_type": 2 00:11:43.808 }, 00:11:43.808 { 00:11:43.808 "dma_device_id": "system", 00:11:43.808 "dma_device_type": 1 00:11:43.808 }, 00:11:43.808 { 00:11:43.808 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:43.808 "dma_device_type": 2 00:11:43.808 }, 00:11:43.808 { 00:11:43.808 "dma_device_id": "system", 00:11:43.808 "dma_device_type": 1 00:11:43.808 }, 00:11:43.808 { 00:11:43.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.808 "dma_device_type": 2 00:11:43.808 } 00:11:43.808 ], 00:11:43.808 "driver_specific": { 00:11:43.808 "raid": { 00:11:43.808 "uuid": "84c0573d-4d7b-4e95-b217-ed50d7e40218", 00:11:43.808 "strip_size_kb": 64, 00:11:43.808 "state": "online", 00:11:43.808 "raid_level": "concat", 00:11:43.808 "superblock": false, 00:11:43.808 "num_base_bdevs": 4, 00:11:43.808 "num_base_bdevs_discovered": 4, 00:11:43.808 "num_base_bdevs_operational": 4, 00:11:43.808 "base_bdevs_list": [ 00:11:43.808 { 00:11:43.808 "name": "BaseBdev1", 00:11:43.808 "uuid": "0291bdbb-b6a2-4be6-ae54-41c7d6b5e73e", 00:11:43.808 "is_configured": true, 00:11:43.808 "data_offset": 0, 00:11:43.808 "data_size": 65536 00:11:43.808 }, 00:11:43.808 { 00:11:43.808 "name": "BaseBdev2", 00:11:43.808 "uuid": "73c2d69b-c6f7-4cd2-bb13-8a02f6df3dcc", 00:11:43.808 "is_configured": true, 00:11:43.808 "data_offset": 0, 00:11:43.808 "data_size": 65536 00:11:43.808 }, 00:11:43.808 { 00:11:43.808 "name": "BaseBdev3", 00:11:43.808 "uuid": "33659581-5929-45a6-8a21-589f116ce3cb", 00:11:43.808 "is_configured": true, 00:11:43.808 "data_offset": 0, 00:11:43.808 "data_size": 65536 00:11:43.808 }, 00:11:43.808 { 00:11:43.808 "name": "BaseBdev4", 00:11:43.808 "uuid": "dfca130c-f3cf-4861-8e7c-496829e6d367", 00:11:43.808 "is_configured": true, 00:11:43.808 "data_offset": 0, 00:11:43.808 "data_size": 65536 00:11:43.808 } 00:11:43.808 ] 00:11:43.808 } 00:11:43.808 } 00:11:43.808 }' 00:11:43.808 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:43.808 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:43.808 BaseBdev2 
00:11:43.808 BaseBdev3 00:11:43.808 BaseBdev4' 00:11:43.808 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.808 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:43.808 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.808 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:43.808 10:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.808 10:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.808 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.808 10:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.808 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.808 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.808 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.808 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:43.808 10:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.808 10:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.808 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.808 10:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.808 10:34:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.808 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.808 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.808 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.808 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:43.808 10:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.808 10:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.808 10:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.808 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.808 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.808 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.067 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:44.067 10:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.067 10:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.067 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.067 10:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.067 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.067 10:34:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.067 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:44.067 10:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.067 10:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.067 [2024-11-20 10:34:47.342227] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:44.067 [2024-11-20 10:34:47.342261] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:44.067 [2024-11-20 10:34:47.342315] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:44.067 10:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.067 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:44.067 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:44.067 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:44.067 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:44.067 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:44.067 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:44.067 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.067 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:44.067 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:44.067 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:44.067 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:44.067 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.067 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.067 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.067 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.067 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.067 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.067 10:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.067 10:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.067 10:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.067 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.067 "name": "Existed_Raid", 00:11:44.067 "uuid": "84c0573d-4d7b-4e95-b217-ed50d7e40218", 00:11:44.067 "strip_size_kb": 64, 00:11:44.067 "state": "offline", 00:11:44.067 "raid_level": "concat", 00:11:44.067 "superblock": false, 00:11:44.067 "num_base_bdevs": 4, 00:11:44.067 "num_base_bdevs_discovered": 3, 00:11:44.067 "num_base_bdevs_operational": 3, 00:11:44.067 "base_bdevs_list": [ 00:11:44.067 { 00:11:44.067 "name": null, 00:11:44.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.067 "is_configured": false, 00:11:44.067 "data_offset": 0, 00:11:44.067 "data_size": 65536 00:11:44.067 }, 00:11:44.067 { 00:11:44.067 "name": "BaseBdev2", 00:11:44.067 "uuid": "73c2d69b-c6f7-4cd2-bb13-8a02f6df3dcc", 00:11:44.067 "is_configured": 
true, 00:11:44.067 "data_offset": 0, 00:11:44.067 "data_size": 65536 00:11:44.067 }, 00:11:44.067 { 00:11:44.067 "name": "BaseBdev3", 00:11:44.067 "uuid": "33659581-5929-45a6-8a21-589f116ce3cb", 00:11:44.067 "is_configured": true, 00:11:44.067 "data_offset": 0, 00:11:44.067 "data_size": 65536 00:11:44.067 }, 00:11:44.067 { 00:11:44.067 "name": "BaseBdev4", 00:11:44.067 "uuid": "dfca130c-f3cf-4861-8e7c-496829e6d367", 00:11:44.067 "is_configured": true, 00:11:44.067 "data_offset": 0, 00:11:44.067 "data_size": 65536 00:11:44.067 } 00:11:44.067 ] 00:11:44.067 }' 00:11:44.067 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.067 10:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.633 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:44.633 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:44.633 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.633 10:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.633 10:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.633 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:44.633 10:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.633 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:44.633 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:44.633 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:44.633 10:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:44.633 10:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.633 [2024-11-20 10:34:47.956382] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:44.633 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.633 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:44.633 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:44.633 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.633 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.633 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.633 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:44.633 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.633 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:44.633 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:44.633 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:44.633 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.633 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.892 [2024-11-20 10:34:48.116524] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:44.892 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.892 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:44.892 10:34:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:44.892 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.892 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.892 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:44.892 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.892 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.892 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:44.892 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:44.892 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:44.892 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.892 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.892 [2024-11-20 10:34:48.276651] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:44.892 [2024-11-20 10:34:48.276761] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:45.153 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.153 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:45.153 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:45.153 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.153 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:45.153 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.153 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:45.153 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.153 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:45.153 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:45.153 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:45.153 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:45.153 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:45.153 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:45.153 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.153 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.153 BaseBdev2 00:11:45.153 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.153 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:45.153 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:45.153 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:45.153 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:45.153 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:45.153 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:11:45.153 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:45.153 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.153 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.153 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.153 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:45.153 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.153 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.153 [ 00:11:45.153 { 00:11:45.153 "name": "BaseBdev2", 00:11:45.153 "aliases": [ 00:11:45.153 "5fd5f6d3-4a1f-44f2-8d16-df39c7ae5e60" 00:11:45.153 ], 00:11:45.153 "product_name": "Malloc disk", 00:11:45.153 "block_size": 512, 00:11:45.153 "num_blocks": 65536, 00:11:45.153 "uuid": "5fd5f6d3-4a1f-44f2-8d16-df39c7ae5e60", 00:11:45.153 "assigned_rate_limits": { 00:11:45.153 "rw_ios_per_sec": 0, 00:11:45.153 "rw_mbytes_per_sec": 0, 00:11:45.153 "r_mbytes_per_sec": 0, 00:11:45.153 "w_mbytes_per_sec": 0 00:11:45.153 }, 00:11:45.153 "claimed": false, 00:11:45.153 "zoned": false, 00:11:45.153 "supported_io_types": { 00:11:45.153 "read": true, 00:11:45.153 "write": true, 00:11:45.153 "unmap": true, 00:11:45.153 "flush": true, 00:11:45.153 "reset": true, 00:11:45.153 "nvme_admin": false, 00:11:45.153 "nvme_io": false, 00:11:45.153 "nvme_io_md": false, 00:11:45.153 "write_zeroes": true, 00:11:45.153 "zcopy": true, 00:11:45.153 "get_zone_info": false, 00:11:45.153 "zone_management": false, 00:11:45.153 "zone_append": false, 00:11:45.153 "compare": false, 00:11:45.154 "compare_and_write": false, 00:11:45.154 "abort": true, 00:11:45.154 "seek_hole": false, 00:11:45.154 
"seek_data": false, 00:11:45.154 "copy": true, 00:11:45.154 "nvme_iov_md": false 00:11:45.154 }, 00:11:45.154 "memory_domains": [ 00:11:45.154 { 00:11:45.154 "dma_device_id": "system", 00:11:45.154 "dma_device_type": 1 00:11:45.154 }, 00:11:45.154 { 00:11:45.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.154 "dma_device_type": 2 00:11:45.154 } 00:11:45.154 ], 00:11:45.154 "driver_specific": {} 00:11:45.154 } 00:11:45.154 ] 00:11:45.154 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.154 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:45.154 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:45.154 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:45.154 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:45.154 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.154 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.154 BaseBdev3 00:11:45.154 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.154 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:45.154 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:45.154 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:45.154 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:45.154 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:45.154 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:11:45.154 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:45.154 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.154 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.154 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.154 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:45.154 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.154 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.154 [ 00:11:45.154 { 00:11:45.154 "name": "BaseBdev3", 00:11:45.154 "aliases": [ 00:11:45.154 "8a4db552-f456-4b24-bfef-2d3a9a0e51a5" 00:11:45.154 ], 00:11:45.154 "product_name": "Malloc disk", 00:11:45.154 "block_size": 512, 00:11:45.154 "num_blocks": 65536, 00:11:45.154 "uuid": "8a4db552-f456-4b24-bfef-2d3a9a0e51a5", 00:11:45.154 "assigned_rate_limits": { 00:11:45.154 "rw_ios_per_sec": 0, 00:11:45.154 "rw_mbytes_per_sec": 0, 00:11:45.154 "r_mbytes_per_sec": 0, 00:11:45.154 "w_mbytes_per_sec": 0 00:11:45.154 }, 00:11:45.154 "claimed": false, 00:11:45.154 "zoned": false, 00:11:45.154 "supported_io_types": { 00:11:45.154 "read": true, 00:11:45.154 "write": true, 00:11:45.154 "unmap": true, 00:11:45.154 "flush": true, 00:11:45.154 "reset": true, 00:11:45.154 "nvme_admin": false, 00:11:45.154 "nvme_io": false, 00:11:45.154 "nvme_io_md": false, 00:11:45.154 "write_zeroes": true, 00:11:45.154 "zcopy": true, 00:11:45.154 "get_zone_info": false, 00:11:45.154 "zone_management": false, 00:11:45.154 "zone_append": false, 00:11:45.154 "compare": false, 00:11:45.154 "compare_and_write": false, 00:11:45.154 "abort": true, 00:11:45.154 "seek_hole": false, 00:11:45.154 "seek_data": false, 
00:11:45.154 "copy": true, 00:11:45.154 "nvme_iov_md": false 00:11:45.154 }, 00:11:45.154 "memory_domains": [ 00:11:45.154 { 00:11:45.154 "dma_device_id": "system", 00:11:45.154 "dma_device_type": 1 00:11:45.154 }, 00:11:45.154 { 00:11:45.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.154 "dma_device_type": 2 00:11:45.154 } 00:11:45.154 ], 00:11:45.154 "driver_specific": {} 00:11:45.154 } 00:11:45.154 ] 00:11:45.154 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.154 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:45.154 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:45.154 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:45.154 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:45.154 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.154 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.414 BaseBdev4 00:11:45.414 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.414 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:45.414 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:45.414 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:45.414 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:45.414 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:45.414 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:45.414 
10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:45.414 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.414 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.414 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.414 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:45.414 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.414 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.414 [ 00:11:45.414 { 00:11:45.414 "name": "BaseBdev4", 00:11:45.414 "aliases": [ 00:11:45.414 "763c50a5-8f54-47d2-9163-a8992074bb23" 00:11:45.414 ], 00:11:45.414 "product_name": "Malloc disk", 00:11:45.414 "block_size": 512, 00:11:45.414 "num_blocks": 65536, 00:11:45.414 "uuid": "763c50a5-8f54-47d2-9163-a8992074bb23", 00:11:45.414 "assigned_rate_limits": { 00:11:45.414 "rw_ios_per_sec": 0, 00:11:45.414 "rw_mbytes_per_sec": 0, 00:11:45.414 "r_mbytes_per_sec": 0, 00:11:45.414 "w_mbytes_per_sec": 0 00:11:45.414 }, 00:11:45.414 "claimed": false, 00:11:45.414 "zoned": false, 00:11:45.414 "supported_io_types": { 00:11:45.414 "read": true, 00:11:45.414 "write": true, 00:11:45.414 "unmap": true, 00:11:45.414 "flush": true, 00:11:45.414 "reset": true, 00:11:45.414 "nvme_admin": false, 00:11:45.414 "nvme_io": false, 00:11:45.414 "nvme_io_md": false, 00:11:45.414 "write_zeroes": true, 00:11:45.414 "zcopy": true, 00:11:45.414 "get_zone_info": false, 00:11:45.414 "zone_management": false, 00:11:45.414 "zone_append": false, 00:11:45.414 "compare": false, 00:11:45.414 "compare_and_write": false, 00:11:45.414 "abort": true, 00:11:45.414 "seek_hole": false, 00:11:45.414 "seek_data": false, 00:11:45.414 
"copy": true, 00:11:45.414 "nvme_iov_md": false 00:11:45.414 }, 00:11:45.414 "memory_domains": [ 00:11:45.414 { 00:11:45.414 "dma_device_id": "system", 00:11:45.414 "dma_device_type": 1 00:11:45.414 }, 00:11:45.414 { 00:11:45.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.414 "dma_device_type": 2 00:11:45.414 } 00:11:45.414 ], 00:11:45.414 "driver_specific": {} 00:11:45.414 } 00:11:45.414 ] 00:11:45.414 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.414 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:45.414 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:45.414 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:45.414 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:45.414 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.414 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.414 [2024-11-20 10:34:48.679471] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:45.414 [2024-11-20 10:34:48.679557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:45.414 [2024-11-20 10:34:48.679602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:45.414 [2024-11-20 10:34:48.681487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:45.414 [2024-11-20 10:34:48.681598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:45.414 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.414 10:34:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:45.414 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.414 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.414 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:45.414 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.414 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.414 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.414 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.414 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.414 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.414 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.414 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.414 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.414 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.414 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.414 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.414 "name": "Existed_Raid", 00:11:45.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.414 "strip_size_kb": 64, 00:11:45.414 "state": "configuring", 00:11:45.414 
"raid_level": "concat", 00:11:45.414 "superblock": false, 00:11:45.414 "num_base_bdevs": 4, 00:11:45.414 "num_base_bdevs_discovered": 3, 00:11:45.414 "num_base_bdevs_operational": 4, 00:11:45.414 "base_bdevs_list": [ 00:11:45.414 { 00:11:45.414 "name": "BaseBdev1", 00:11:45.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.414 "is_configured": false, 00:11:45.414 "data_offset": 0, 00:11:45.414 "data_size": 0 00:11:45.414 }, 00:11:45.414 { 00:11:45.414 "name": "BaseBdev2", 00:11:45.414 "uuid": "5fd5f6d3-4a1f-44f2-8d16-df39c7ae5e60", 00:11:45.414 "is_configured": true, 00:11:45.414 "data_offset": 0, 00:11:45.414 "data_size": 65536 00:11:45.414 }, 00:11:45.414 { 00:11:45.414 "name": "BaseBdev3", 00:11:45.414 "uuid": "8a4db552-f456-4b24-bfef-2d3a9a0e51a5", 00:11:45.414 "is_configured": true, 00:11:45.414 "data_offset": 0, 00:11:45.414 "data_size": 65536 00:11:45.414 }, 00:11:45.414 { 00:11:45.414 "name": "BaseBdev4", 00:11:45.414 "uuid": "763c50a5-8f54-47d2-9163-a8992074bb23", 00:11:45.414 "is_configured": true, 00:11:45.414 "data_offset": 0, 00:11:45.414 "data_size": 65536 00:11:45.414 } 00:11:45.414 ] 00:11:45.414 }' 00:11:45.414 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.414 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.674 10:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:45.674 10:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.674 10:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.674 [2024-11-20 10:34:49.118779] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:45.674 10:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.674 10:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:45.674 10:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.674 10:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.674 10:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:45.674 10:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.674 10:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.674 10:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.674 10:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.674 10:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.674 10:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.674 10:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.674 10:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.674 10:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.674 10:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.934 10:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.934 10:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.934 "name": "Existed_Raid", 00:11:45.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.934 "strip_size_kb": 64, 00:11:45.934 "state": "configuring", 00:11:45.934 "raid_level": "concat", 00:11:45.934 "superblock": false, 
00:11:45.934 "num_base_bdevs": 4, 00:11:45.934 "num_base_bdevs_discovered": 2, 00:11:45.934 "num_base_bdevs_operational": 4, 00:11:45.934 "base_bdevs_list": [ 00:11:45.934 { 00:11:45.934 "name": "BaseBdev1", 00:11:45.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.934 "is_configured": false, 00:11:45.934 "data_offset": 0, 00:11:45.934 "data_size": 0 00:11:45.934 }, 00:11:45.934 { 00:11:45.934 "name": null, 00:11:45.934 "uuid": "5fd5f6d3-4a1f-44f2-8d16-df39c7ae5e60", 00:11:45.934 "is_configured": false, 00:11:45.934 "data_offset": 0, 00:11:45.934 "data_size": 65536 00:11:45.934 }, 00:11:45.934 { 00:11:45.934 "name": "BaseBdev3", 00:11:45.934 "uuid": "8a4db552-f456-4b24-bfef-2d3a9a0e51a5", 00:11:45.934 "is_configured": true, 00:11:45.934 "data_offset": 0, 00:11:45.934 "data_size": 65536 00:11:45.934 }, 00:11:45.934 { 00:11:45.934 "name": "BaseBdev4", 00:11:45.934 "uuid": "763c50a5-8f54-47d2-9163-a8992074bb23", 00:11:45.934 "is_configured": true, 00:11:45.934 "data_offset": 0, 00:11:45.934 "data_size": 65536 00:11:45.934 } 00:11:45.934 ] 00:11:45.934 }' 00:11:45.934 10:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.934 10:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.194 10:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.194 10:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:46.194 10:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.194 10:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.194 10:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.194 10:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:46.194 10:34:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:46.194 10:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.194 10:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.194 [2024-11-20 10:34:49.652795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:46.194 BaseBdev1 00:11:46.194 10:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.194 10:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:46.194 10:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:46.194 10:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:46.194 10:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:46.194 10:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:46.194 10:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:46.194 10:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:46.194 10:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.194 10:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.194 10:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.194 10:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:46.194 10:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.194 10:34:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:46.453 [ 00:11:46.453 { 00:11:46.453 "name": "BaseBdev1", 00:11:46.453 "aliases": [ 00:11:46.453 "2a2c9386-9596-430a-b4ac-20b45a7977a4" 00:11:46.453 ], 00:11:46.453 "product_name": "Malloc disk", 00:11:46.453 "block_size": 512, 00:11:46.453 "num_blocks": 65536, 00:11:46.453 "uuid": "2a2c9386-9596-430a-b4ac-20b45a7977a4", 00:11:46.453 "assigned_rate_limits": { 00:11:46.453 "rw_ios_per_sec": 0, 00:11:46.453 "rw_mbytes_per_sec": 0, 00:11:46.453 "r_mbytes_per_sec": 0, 00:11:46.453 "w_mbytes_per_sec": 0 00:11:46.453 }, 00:11:46.453 "claimed": true, 00:11:46.453 "claim_type": "exclusive_write", 00:11:46.453 "zoned": false, 00:11:46.453 "supported_io_types": { 00:11:46.453 "read": true, 00:11:46.453 "write": true, 00:11:46.453 "unmap": true, 00:11:46.453 "flush": true, 00:11:46.453 "reset": true, 00:11:46.453 "nvme_admin": false, 00:11:46.453 "nvme_io": false, 00:11:46.453 "nvme_io_md": false, 00:11:46.453 "write_zeroes": true, 00:11:46.453 "zcopy": true, 00:11:46.453 "get_zone_info": false, 00:11:46.453 "zone_management": false, 00:11:46.453 "zone_append": false, 00:11:46.453 "compare": false, 00:11:46.453 "compare_and_write": false, 00:11:46.453 "abort": true, 00:11:46.453 "seek_hole": false, 00:11:46.453 "seek_data": false, 00:11:46.453 "copy": true, 00:11:46.453 "nvme_iov_md": false 00:11:46.453 }, 00:11:46.453 "memory_domains": [ 00:11:46.453 { 00:11:46.453 "dma_device_id": "system", 00:11:46.453 "dma_device_type": 1 00:11:46.453 }, 00:11:46.453 { 00:11:46.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.453 "dma_device_type": 2 00:11:46.453 } 00:11:46.453 ], 00:11:46.453 "driver_specific": {} 00:11:46.453 } 00:11:46.453 ] 00:11:46.453 10:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.453 10:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:46.453 10:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:46.453 10:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.453 10:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.453 10:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:46.453 10:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:46.453 10:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.453 10:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.453 10:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.453 10:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.453 10:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.453 10:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.453 10:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.453 10:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.454 10:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.454 10:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.454 10:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.454 "name": "Existed_Raid", 00:11:46.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.454 "strip_size_kb": 64, 00:11:46.454 "state": "configuring", 00:11:46.454 "raid_level": "concat", 00:11:46.454 "superblock": false, 
00:11:46.454 "num_base_bdevs": 4, 00:11:46.454 "num_base_bdevs_discovered": 3, 00:11:46.454 "num_base_bdevs_operational": 4, 00:11:46.454 "base_bdevs_list": [ 00:11:46.454 { 00:11:46.454 "name": "BaseBdev1", 00:11:46.454 "uuid": "2a2c9386-9596-430a-b4ac-20b45a7977a4", 00:11:46.454 "is_configured": true, 00:11:46.454 "data_offset": 0, 00:11:46.454 "data_size": 65536 00:11:46.454 }, 00:11:46.454 { 00:11:46.454 "name": null, 00:11:46.454 "uuid": "5fd5f6d3-4a1f-44f2-8d16-df39c7ae5e60", 00:11:46.454 "is_configured": false, 00:11:46.454 "data_offset": 0, 00:11:46.454 "data_size": 65536 00:11:46.454 }, 00:11:46.454 { 00:11:46.454 "name": "BaseBdev3", 00:11:46.454 "uuid": "8a4db552-f456-4b24-bfef-2d3a9a0e51a5", 00:11:46.454 "is_configured": true, 00:11:46.454 "data_offset": 0, 00:11:46.454 "data_size": 65536 00:11:46.454 }, 00:11:46.454 { 00:11:46.454 "name": "BaseBdev4", 00:11:46.454 "uuid": "763c50a5-8f54-47d2-9163-a8992074bb23", 00:11:46.454 "is_configured": true, 00:11:46.454 "data_offset": 0, 00:11:46.454 "data_size": 65536 00:11:46.454 } 00:11:46.454 ] 00:11:46.454 }' 00:11:46.454 10:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.454 10:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.713 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:46.713 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.713 10:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.713 10:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.713 10:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.714 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:46.714 10:34:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:46.714 10:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.714 10:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.714 [2024-11-20 10:34:50.160071] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:46.714 10:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.714 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:46.714 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.714 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.714 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:46.714 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:46.714 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.714 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.714 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.714 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.714 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.714 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.714 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.714 10:34:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.714 10:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.973 10:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.973 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.973 "name": "Existed_Raid", 00:11:46.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.973 "strip_size_kb": 64, 00:11:46.973 "state": "configuring", 00:11:46.973 "raid_level": "concat", 00:11:46.973 "superblock": false, 00:11:46.973 "num_base_bdevs": 4, 00:11:46.973 "num_base_bdevs_discovered": 2, 00:11:46.973 "num_base_bdevs_operational": 4, 00:11:46.973 "base_bdevs_list": [ 00:11:46.973 { 00:11:46.973 "name": "BaseBdev1", 00:11:46.973 "uuid": "2a2c9386-9596-430a-b4ac-20b45a7977a4", 00:11:46.973 "is_configured": true, 00:11:46.973 "data_offset": 0, 00:11:46.973 "data_size": 65536 00:11:46.973 }, 00:11:46.973 { 00:11:46.973 "name": null, 00:11:46.973 "uuid": "5fd5f6d3-4a1f-44f2-8d16-df39c7ae5e60", 00:11:46.973 "is_configured": false, 00:11:46.973 "data_offset": 0, 00:11:46.973 "data_size": 65536 00:11:46.973 }, 00:11:46.973 { 00:11:46.973 "name": null, 00:11:46.973 "uuid": "8a4db552-f456-4b24-bfef-2d3a9a0e51a5", 00:11:46.973 "is_configured": false, 00:11:46.973 "data_offset": 0, 00:11:46.973 "data_size": 65536 00:11:46.973 }, 00:11:46.973 { 00:11:46.973 "name": "BaseBdev4", 00:11:46.973 "uuid": "763c50a5-8f54-47d2-9163-a8992074bb23", 00:11:46.973 "is_configured": true, 00:11:46.973 "data_offset": 0, 00:11:46.973 "data_size": 65536 00:11:46.973 } 00:11:46.973 ] 00:11:46.973 }' 00:11:46.973 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.973 10:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.231 10:34:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:47.231 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.231 10:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.231 10:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.231 10:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.231 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:47.231 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:47.231 10:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.231 10:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.231 [2024-11-20 10:34:50.671209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:47.231 10:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.231 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:47.231 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.231 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.231 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:47.231 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.231 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.231 10:34:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.231 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.231 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.232 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.232 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.232 10:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.232 10:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.232 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.232 10:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.491 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.491 "name": "Existed_Raid", 00:11:47.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.491 "strip_size_kb": 64, 00:11:47.491 "state": "configuring", 00:11:47.491 "raid_level": "concat", 00:11:47.491 "superblock": false, 00:11:47.491 "num_base_bdevs": 4, 00:11:47.491 "num_base_bdevs_discovered": 3, 00:11:47.491 "num_base_bdevs_operational": 4, 00:11:47.491 "base_bdevs_list": [ 00:11:47.491 { 00:11:47.491 "name": "BaseBdev1", 00:11:47.491 "uuid": "2a2c9386-9596-430a-b4ac-20b45a7977a4", 00:11:47.491 "is_configured": true, 00:11:47.491 "data_offset": 0, 00:11:47.491 "data_size": 65536 00:11:47.491 }, 00:11:47.491 { 00:11:47.491 "name": null, 00:11:47.491 "uuid": "5fd5f6d3-4a1f-44f2-8d16-df39c7ae5e60", 00:11:47.491 "is_configured": false, 00:11:47.491 "data_offset": 0, 00:11:47.491 "data_size": 65536 00:11:47.491 }, 00:11:47.491 { 00:11:47.491 "name": "BaseBdev3", 00:11:47.491 "uuid": 
"8a4db552-f456-4b24-bfef-2d3a9a0e51a5", 00:11:47.491 "is_configured": true, 00:11:47.491 "data_offset": 0, 00:11:47.491 "data_size": 65536 00:11:47.491 }, 00:11:47.491 { 00:11:47.491 "name": "BaseBdev4", 00:11:47.491 "uuid": "763c50a5-8f54-47d2-9163-a8992074bb23", 00:11:47.491 "is_configured": true, 00:11:47.491 "data_offset": 0, 00:11:47.491 "data_size": 65536 00:11:47.491 } 00:11:47.491 ] 00:11:47.491 }' 00:11:47.491 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.491 10:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.751 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.751 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:47.751 10:34:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.751 10:34:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.751 10:34:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.751 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:47.751 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:47.751 10:34:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.751 10:34:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.751 [2024-11-20 10:34:51.138484] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:48.010 10:34:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.010 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:48.010 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.010 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.010 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:48.010 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:48.010 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.010 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.010 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.010 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.010 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.010 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.010 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.010 10:34:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.010 10:34:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.010 10:34:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.010 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.010 "name": "Existed_Raid", 00:11:48.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.010 "strip_size_kb": 64, 00:11:48.010 "state": "configuring", 00:11:48.010 "raid_level": "concat", 00:11:48.010 "superblock": false, 00:11:48.010 "num_base_bdevs": 4, 00:11:48.010 
"num_base_bdevs_discovered": 2, 00:11:48.010 "num_base_bdevs_operational": 4, 00:11:48.010 "base_bdevs_list": [ 00:11:48.010 { 00:11:48.010 "name": null, 00:11:48.010 "uuid": "2a2c9386-9596-430a-b4ac-20b45a7977a4", 00:11:48.010 "is_configured": false, 00:11:48.010 "data_offset": 0, 00:11:48.010 "data_size": 65536 00:11:48.010 }, 00:11:48.010 { 00:11:48.010 "name": null, 00:11:48.010 "uuid": "5fd5f6d3-4a1f-44f2-8d16-df39c7ae5e60", 00:11:48.010 "is_configured": false, 00:11:48.010 "data_offset": 0, 00:11:48.010 "data_size": 65536 00:11:48.010 }, 00:11:48.010 { 00:11:48.010 "name": "BaseBdev3", 00:11:48.010 "uuid": "8a4db552-f456-4b24-bfef-2d3a9a0e51a5", 00:11:48.010 "is_configured": true, 00:11:48.010 "data_offset": 0, 00:11:48.010 "data_size": 65536 00:11:48.010 }, 00:11:48.010 { 00:11:48.010 "name": "BaseBdev4", 00:11:48.010 "uuid": "763c50a5-8f54-47d2-9163-a8992074bb23", 00:11:48.010 "is_configured": true, 00:11:48.010 "data_offset": 0, 00:11:48.010 "data_size": 65536 00:11:48.010 } 00:11:48.010 ] 00:11:48.010 }' 00:11:48.010 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.010 10:34:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.283 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.283 10:34:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.283 10:34:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.283 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:48.283 10:34:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.283 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:48.283 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:48.283 10:34:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.283 10:34:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.283 [2024-11-20 10:34:51.729174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:48.283 10:34:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.283 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:48.283 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.283 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.283 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:48.283 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:48.283 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.283 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.283 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.283 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.283 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.283 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.283 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.283 10:34:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.283 10:34:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.546 10:34:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.546 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.546 "name": "Existed_Raid", 00:11:48.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.546 "strip_size_kb": 64, 00:11:48.546 "state": "configuring", 00:11:48.546 "raid_level": "concat", 00:11:48.546 "superblock": false, 00:11:48.546 "num_base_bdevs": 4, 00:11:48.546 "num_base_bdevs_discovered": 3, 00:11:48.546 "num_base_bdevs_operational": 4, 00:11:48.546 "base_bdevs_list": [ 00:11:48.546 { 00:11:48.546 "name": null, 00:11:48.546 "uuid": "2a2c9386-9596-430a-b4ac-20b45a7977a4", 00:11:48.546 "is_configured": false, 00:11:48.546 "data_offset": 0, 00:11:48.546 "data_size": 65536 00:11:48.546 }, 00:11:48.546 { 00:11:48.546 "name": "BaseBdev2", 00:11:48.546 "uuid": "5fd5f6d3-4a1f-44f2-8d16-df39c7ae5e60", 00:11:48.546 "is_configured": true, 00:11:48.546 "data_offset": 0, 00:11:48.546 "data_size": 65536 00:11:48.546 }, 00:11:48.546 { 00:11:48.546 "name": "BaseBdev3", 00:11:48.546 "uuid": "8a4db552-f456-4b24-bfef-2d3a9a0e51a5", 00:11:48.546 "is_configured": true, 00:11:48.546 "data_offset": 0, 00:11:48.546 "data_size": 65536 00:11:48.546 }, 00:11:48.546 { 00:11:48.546 "name": "BaseBdev4", 00:11:48.546 "uuid": "763c50a5-8f54-47d2-9163-a8992074bb23", 00:11:48.546 "is_configured": true, 00:11:48.546 "data_offset": 0, 00:11:48.546 "data_size": 65536 00:11:48.546 } 00:11:48.546 ] 00:11:48.546 }' 00:11:48.546 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.546 10:34:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.805 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:48.805 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:48.805 10:34:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.805 10:34:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.805 10:34:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.805 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:48.805 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.805 10:34:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.805 10:34:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.805 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:48.805 10:34:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.064 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2a2c9386-9596-430a-b4ac-20b45a7977a4 00:11:49.064 10:34:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.064 10:34:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.064 [2024-11-20 10:34:52.353253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:49.064 [2024-11-20 10:34:52.353425] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:49.064 [2024-11-20 10:34:52.353457] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:49.064 [2024-11-20 10:34:52.353768] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:49.064 [2024-11-20 10:34:52.353945] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:49.064 [2024-11-20 10:34:52.353960] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:49.064 [2024-11-20 10:34:52.354261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:49.064 NewBaseBdev 00:11:49.064 10:34:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.064 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:49.064 10:34:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:49.064 10:34:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:49.064 10:34:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:49.064 10:34:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:49.064 10:34:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:49.064 10:34:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:49.064 10:34:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.064 10:34:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.064 10:34:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.064 10:34:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:49.064 10:34:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.064 10:34:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:49.064 [ 00:11:49.064 { 00:11:49.064 "name": "NewBaseBdev", 00:11:49.064 "aliases": [ 00:11:49.064 "2a2c9386-9596-430a-b4ac-20b45a7977a4" 00:11:49.064 ], 00:11:49.064 "product_name": "Malloc disk", 00:11:49.064 "block_size": 512, 00:11:49.064 "num_blocks": 65536, 00:11:49.064 "uuid": "2a2c9386-9596-430a-b4ac-20b45a7977a4", 00:11:49.064 "assigned_rate_limits": { 00:11:49.064 "rw_ios_per_sec": 0, 00:11:49.064 "rw_mbytes_per_sec": 0, 00:11:49.064 "r_mbytes_per_sec": 0, 00:11:49.064 "w_mbytes_per_sec": 0 00:11:49.064 }, 00:11:49.064 "claimed": true, 00:11:49.064 "claim_type": "exclusive_write", 00:11:49.064 "zoned": false, 00:11:49.064 "supported_io_types": { 00:11:49.064 "read": true, 00:11:49.064 "write": true, 00:11:49.064 "unmap": true, 00:11:49.064 "flush": true, 00:11:49.064 "reset": true, 00:11:49.064 "nvme_admin": false, 00:11:49.064 "nvme_io": false, 00:11:49.064 "nvme_io_md": false, 00:11:49.064 "write_zeroes": true, 00:11:49.064 "zcopy": true, 00:11:49.064 "get_zone_info": false, 00:11:49.064 "zone_management": false, 00:11:49.064 "zone_append": false, 00:11:49.064 "compare": false, 00:11:49.064 "compare_and_write": false, 00:11:49.064 "abort": true, 00:11:49.064 "seek_hole": false, 00:11:49.064 "seek_data": false, 00:11:49.064 "copy": true, 00:11:49.064 "nvme_iov_md": false 00:11:49.064 }, 00:11:49.064 "memory_domains": [ 00:11:49.064 { 00:11:49.064 "dma_device_id": "system", 00:11:49.064 "dma_device_type": 1 00:11:49.064 }, 00:11:49.064 { 00:11:49.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.064 "dma_device_type": 2 00:11:49.064 } 00:11:49.064 ], 00:11:49.064 "driver_specific": {} 00:11:49.064 } 00:11:49.064 ] 00:11:49.064 10:34:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.064 10:34:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:49.064 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:49.064 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.064 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:49.064 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:49.064 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:49.064 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:49.064 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.064 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.064 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.064 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.064 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.064 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.064 10:34:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.064 10:34:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.064 10:34:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.064 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.064 "name": "Existed_Raid", 00:11:49.064 "uuid": "96aed2b8-b284-4ea3-94b2-9e18b5fb81e9", 00:11:49.064 "strip_size_kb": 64, 00:11:49.064 "state": "online", 00:11:49.064 "raid_level": "concat", 00:11:49.064 "superblock": false, 00:11:49.064 
"num_base_bdevs": 4, 00:11:49.064 "num_base_bdevs_discovered": 4, 00:11:49.064 "num_base_bdevs_operational": 4, 00:11:49.064 "base_bdevs_list": [ 00:11:49.064 { 00:11:49.064 "name": "NewBaseBdev", 00:11:49.064 "uuid": "2a2c9386-9596-430a-b4ac-20b45a7977a4", 00:11:49.064 "is_configured": true, 00:11:49.064 "data_offset": 0, 00:11:49.064 "data_size": 65536 00:11:49.064 }, 00:11:49.064 { 00:11:49.064 "name": "BaseBdev2", 00:11:49.064 "uuid": "5fd5f6d3-4a1f-44f2-8d16-df39c7ae5e60", 00:11:49.064 "is_configured": true, 00:11:49.064 "data_offset": 0, 00:11:49.064 "data_size": 65536 00:11:49.064 }, 00:11:49.064 { 00:11:49.064 "name": "BaseBdev3", 00:11:49.064 "uuid": "8a4db552-f456-4b24-bfef-2d3a9a0e51a5", 00:11:49.064 "is_configured": true, 00:11:49.064 "data_offset": 0, 00:11:49.064 "data_size": 65536 00:11:49.064 }, 00:11:49.064 { 00:11:49.064 "name": "BaseBdev4", 00:11:49.064 "uuid": "763c50a5-8f54-47d2-9163-a8992074bb23", 00:11:49.064 "is_configured": true, 00:11:49.064 "data_offset": 0, 00:11:49.064 "data_size": 65536 00:11:49.064 } 00:11:49.064 ] 00:11:49.064 }' 00:11:49.064 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.064 10:34:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.632 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:49.632 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:49.632 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:49.632 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:49.632 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:49.632 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:49.632 10:34:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:49.632 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:49.632 10:34:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.632 10:34:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.632 [2024-11-20 10:34:52.856868] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:49.632 10:34:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.632 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:49.632 "name": "Existed_Raid", 00:11:49.632 "aliases": [ 00:11:49.632 "96aed2b8-b284-4ea3-94b2-9e18b5fb81e9" 00:11:49.632 ], 00:11:49.632 "product_name": "Raid Volume", 00:11:49.632 "block_size": 512, 00:11:49.632 "num_blocks": 262144, 00:11:49.632 "uuid": "96aed2b8-b284-4ea3-94b2-9e18b5fb81e9", 00:11:49.632 "assigned_rate_limits": { 00:11:49.632 "rw_ios_per_sec": 0, 00:11:49.632 "rw_mbytes_per_sec": 0, 00:11:49.632 "r_mbytes_per_sec": 0, 00:11:49.632 "w_mbytes_per_sec": 0 00:11:49.632 }, 00:11:49.632 "claimed": false, 00:11:49.632 "zoned": false, 00:11:49.632 "supported_io_types": { 00:11:49.632 "read": true, 00:11:49.632 "write": true, 00:11:49.632 "unmap": true, 00:11:49.632 "flush": true, 00:11:49.632 "reset": true, 00:11:49.632 "nvme_admin": false, 00:11:49.632 "nvme_io": false, 00:11:49.632 "nvme_io_md": false, 00:11:49.632 "write_zeroes": true, 00:11:49.632 "zcopy": false, 00:11:49.632 "get_zone_info": false, 00:11:49.632 "zone_management": false, 00:11:49.632 "zone_append": false, 00:11:49.632 "compare": false, 00:11:49.632 "compare_and_write": false, 00:11:49.632 "abort": false, 00:11:49.632 "seek_hole": false, 00:11:49.632 "seek_data": false, 00:11:49.632 "copy": false, 00:11:49.632 "nvme_iov_md": false 00:11:49.632 }, 
00:11:49.632 "memory_domains": [ 00:11:49.632 { 00:11:49.632 "dma_device_id": "system", 00:11:49.632 "dma_device_type": 1 00:11:49.632 }, 00:11:49.632 { 00:11:49.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.632 "dma_device_type": 2 00:11:49.632 }, 00:11:49.632 { 00:11:49.632 "dma_device_id": "system", 00:11:49.633 "dma_device_type": 1 00:11:49.633 }, 00:11:49.633 { 00:11:49.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.633 "dma_device_type": 2 00:11:49.633 }, 00:11:49.633 { 00:11:49.633 "dma_device_id": "system", 00:11:49.633 "dma_device_type": 1 00:11:49.633 }, 00:11:49.633 { 00:11:49.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.633 "dma_device_type": 2 00:11:49.633 }, 00:11:49.633 { 00:11:49.633 "dma_device_id": "system", 00:11:49.633 "dma_device_type": 1 00:11:49.633 }, 00:11:49.633 { 00:11:49.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.633 "dma_device_type": 2 00:11:49.633 } 00:11:49.633 ], 00:11:49.633 "driver_specific": { 00:11:49.633 "raid": { 00:11:49.633 "uuid": "96aed2b8-b284-4ea3-94b2-9e18b5fb81e9", 00:11:49.633 "strip_size_kb": 64, 00:11:49.633 "state": "online", 00:11:49.633 "raid_level": "concat", 00:11:49.633 "superblock": false, 00:11:49.633 "num_base_bdevs": 4, 00:11:49.633 "num_base_bdevs_discovered": 4, 00:11:49.633 "num_base_bdevs_operational": 4, 00:11:49.633 "base_bdevs_list": [ 00:11:49.633 { 00:11:49.633 "name": "NewBaseBdev", 00:11:49.633 "uuid": "2a2c9386-9596-430a-b4ac-20b45a7977a4", 00:11:49.633 "is_configured": true, 00:11:49.633 "data_offset": 0, 00:11:49.633 "data_size": 65536 00:11:49.633 }, 00:11:49.633 { 00:11:49.633 "name": "BaseBdev2", 00:11:49.633 "uuid": "5fd5f6d3-4a1f-44f2-8d16-df39c7ae5e60", 00:11:49.633 "is_configured": true, 00:11:49.633 "data_offset": 0, 00:11:49.633 "data_size": 65536 00:11:49.633 }, 00:11:49.633 { 00:11:49.633 "name": "BaseBdev3", 00:11:49.633 "uuid": "8a4db552-f456-4b24-bfef-2d3a9a0e51a5", 00:11:49.633 "is_configured": true, 00:11:49.633 "data_offset": 0, 
00:11:49.633 "data_size": 65536 00:11:49.633 }, 00:11:49.633 { 00:11:49.633 "name": "BaseBdev4", 00:11:49.633 "uuid": "763c50a5-8f54-47d2-9163-a8992074bb23", 00:11:49.633 "is_configured": true, 00:11:49.633 "data_offset": 0, 00:11:49.633 "data_size": 65536 00:11:49.633 } 00:11:49.633 ] 00:11:49.633 } 00:11:49.633 } 00:11:49.633 }' 00:11:49.633 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:49.633 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:49.633 BaseBdev2 00:11:49.633 BaseBdev3 00:11:49.633 BaseBdev4' 00:11:49.633 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.633 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:49.633 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:49.633 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.633 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:49.633 10:34:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.633 10:34:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.633 10:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.633 10:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:49.633 10:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:49.633 10:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:11:49.633 10:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:49.633 10:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.633 10:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.633 10:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.633 10:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.633 10:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:49.633 10:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:49.633 10:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:49.633 10:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:49.633 10:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.633 10:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.633 10:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.633 10:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.892 10:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:49.892 10:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:49.892 10:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:49.892 10:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:11:49.892 10:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.892 10:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.892 10:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.892 10:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.892 10:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:49.892 10:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:49.892 10:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:49.892 10:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.892 10:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.892 [2024-11-20 10:34:53.151956] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:49.892 [2024-11-20 10:34:53.152042] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:49.892 [2024-11-20 10:34:53.152150] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:49.892 [2024-11-20 10:34:53.152245] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:49.892 [2024-11-20 10:34:53.152294] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:49.892 10:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.892 10:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71465 00:11:49.892 10:34:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71465 ']' 00:11:49.892 10:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71465 00:11:49.892 10:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:49.892 10:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:49.892 10:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71465 00:11:49.892 killing process with pid 71465 00:11:49.892 10:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:49.892 10:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:49.892 10:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71465' 00:11:49.892 10:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71465 00:11:49.892 [2024-11-20 10:34:53.181763] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:49.892 10:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71465 00:11:50.150 [2024-11-20 10:34:53.596311] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:51.529 00:11:51.529 real 0m11.762s 00:11:51.529 user 0m18.635s 00:11:51.529 sys 0m2.041s 00:11:51.529 ************************************ 00:11:51.529 END TEST raid_state_function_test 00:11:51.529 ************************************ 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.529 10:34:54 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:11:51.529 10:34:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:51.529 10:34:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:51.529 10:34:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:51.529 ************************************ 00:11:51.529 START TEST raid_state_function_test_sb 00:11:51.529 ************************************ 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:51.529 Process raid pid: 72142 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # 
raid_pid=72142 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72142' 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72142 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72142 ']' 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:51.529 10:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.529 [2024-11-20 10:34:54.944929] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:11:51.529 [2024-11-20 10:34:54.945142] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:51.789 [2024-11-20 10:34:55.100901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.789 [2024-11-20 10:34:55.225574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.081 [2024-11-20 10:34:55.442066] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:52.081 [2024-11-20 10:34:55.442205] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:52.363 10:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:52.363 10:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:52.363 10:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:52.363 10:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.363 10:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.363 [2024-11-20 10:34:55.817098] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:52.363 [2024-11-20 10:34:55.817208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:52.363 [2024-11-20 10:34:55.817246] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:52.363 [2024-11-20 10:34:55.817282] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:52.363 [2024-11-20 10:34:55.817325] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:11:52.363 [2024-11-20 10:34:55.817360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:52.363 [2024-11-20 10:34:55.817420] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:52.363 [2024-11-20 10:34:55.817446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:52.363 10:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.363 10:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:52.363 10:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.363 10:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.363 10:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:52.363 10:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.363 10:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.363 10:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.363 10:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.363 10:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.363 10:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.363 10:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.363 10:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.363 10:34:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.363 10:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.623 10:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.623 10:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.623 "name": "Existed_Raid", 00:11:52.623 "uuid": "f8f230ee-3e46-437d-8050-a19bd3994fda", 00:11:52.623 "strip_size_kb": 64, 00:11:52.623 "state": "configuring", 00:11:52.623 "raid_level": "concat", 00:11:52.623 "superblock": true, 00:11:52.623 "num_base_bdevs": 4, 00:11:52.623 "num_base_bdevs_discovered": 0, 00:11:52.623 "num_base_bdevs_operational": 4, 00:11:52.623 "base_bdevs_list": [ 00:11:52.623 { 00:11:52.623 "name": "BaseBdev1", 00:11:52.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.623 "is_configured": false, 00:11:52.623 "data_offset": 0, 00:11:52.623 "data_size": 0 00:11:52.623 }, 00:11:52.623 { 00:11:52.623 "name": "BaseBdev2", 00:11:52.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.623 "is_configured": false, 00:11:52.623 "data_offset": 0, 00:11:52.623 "data_size": 0 00:11:52.623 }, 00:11:52.623 { 00:11:52.623 "name": "BaseBdev3", 00:11:52.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.623 "is_configured": false, 00:11:52.623 "data_offset": 0, 00:11:52.623 "data_size": 0 00:11:52.623 }, 00:11:52.623 { 00:11:52.623 "name": "BaseBdev4", 00:11:52.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.623 "is_configured": false, 00:11:52.623 "data_offset": 0, 00:11:52.623 "data_size": 0 00:11:52.623 } 00:11:52.623 ] 00:11:52.623 }' 00:11:52.623 10:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.623 10:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.883 10:34:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:52.883 10:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.883 10:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.883 [2024-11-20 10:34:56.232330] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:52.883 [2024-11-20 10:34:56.232462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:52.883 10:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.883 10:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:52.883 10:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.883 10:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.883 [2024-11-20 10:34:56.244315] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:52.883 [2024-11-20 10:34:56.244435] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:52.883 [2024-11-20 10:34:56.244467] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:52.883 [2024-11-20 10:34:56.244493] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:52.883 [2024-11-20 10:34:56.244515] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:52.883 [2024-11-20 10:34:56.244538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:52.883 [2024-11-20 10:34:56.244559] bdev.c:8278:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:11:52.883 [2024-11-20 10:34:56.244582] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:52.883 10:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.883 10:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:52.883 10:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.883 10:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.883 [2024-11-20 10:34:56.295252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:52.883 BaseBdev1 00:11:52.883 10:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.883 10:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:52.883 10:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:52.883 10:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:52.883 10:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:52.883 10:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:52.883 10:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:52.883 10:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:52.883 10:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.883 10:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.883 10:34:56 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.884 10:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:52.884 10:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.884 10:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.884 [ 00:11:52.884 { 00:11:52.884 "name": "BaseBdev1", 00:11:52.884 "aliases": [ 00:11:52.884 "b0294b13-8228-4cbe-999b-7ecf85f94b64" 00:11:52.884 ], 00:11:52.884 "product_name": "Malloc disk", 00:11:52.884 "block_size": 512, 00:11:52.884 "num_blocks": 65536, 00:11:52.884 "uuid": "b0294b13-8228-4cbe-999b-7ecf85f94b64", 00:11:52.884 "assigned_rate_limits": { 00:11:52.884 "rw_ios_per_sec": 0, 00:11:52.884 "rw_mbytes_per_sec": 0, 00:11:52.884 "r_mbytes_per_sec": 0, 00:11:52.884 "w_mbytes_per_sec": 0 00:11:52.884 }, 00:11:52.884 "claimed": true, 00:11:52.884 "claim_type": "exclusive_write", 00:11:52.884 "zoned": false, 00:11:52.884 "supported_io_types": { 00:11:52.884 "read": true, 00:11:52.884 "write": true, 00:11:52.884 "unmap": true, 00:11:52.884 "flush": true, 00:11:52.884 "reset": true, 00:11:52.884 "nvme_admin": false, 00:11:52.884 "nvme_io": false, 00:11:52.884 "nvme_io_md": false, 00:11:52.884 "write_zeroes": true, 00:11:52.884 "zcopy": true, 00:11:52.884 "get_zone_info": false, 00:11:52.884 "zone_management": false, 00:11:52.884 "zone_append": false, 00:11:52.884 "compare": false, 00:11:52.884 "compare_and_write": false, 00:11:52.884 "abort": true, 00:11:52.884 "seek_hole": false, 00:11:52.884 "seek_data": false, 00:11:52.884 "copy": true, 00:11:52.884 "nvme_iov_md": false 00:11:52.884 }, 00:11:52.884 "memory_domains": [ 00:11:52.884 { 00:11:52.884 "dma_device_id": "system", 00:11:52.884 "dma_device_type": 1 00:11:52.884 }, 00:11:52.884 { 00:11:52.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.884 "dma_device_type": 2 00:11:52.884 } 
00:11:52.884 ], 00:11:52.884 "driver_specific": {} 00:11:52.884 } 00:11:52.884 ] 00:11:52.884 10:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.884 10:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:52.884 10:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:52.884 10:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.884 10:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.884 10:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:52.884 10:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.884 10:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.884 10:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.884 10:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.884 10:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.884 10:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.884 10:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.884 10:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.884 10:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.884 10:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.884 10:34:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.143 10:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.143 "name": "Existed_Raid", 00:11:53.143 "uuid": "3f8768e7-144a-4d1e-89a4-99c98708b652", 00:11:53.143 "strip_size_kb": 64, 00:11:53.143 "state": "configuring", 00:11:53.143 "raid_level": "concat", 00:11:53.143 "superblock": true, 00:11:53.143 "num_base_bdevs": 4, 00:11:53.143 "num_base_bdevs_discovered": 1, 00:11:53.143 "num_base_bdevs_operational": 4, 00:11:53.143 "base_bdevs_list": [ 00:11:53.143 { 00:11:53.143 "name": "BaseBdev1", 00:11:53.143 "uuid": "b0294b13-8228-4cbe-999b-7ecf85f94b64", 00:11:53.143 "is_configured": true, 00:11:53.143 "data_offset": 2048, 00:11:53.143 "data_size": 63488 00:11:53.143 }, 00:11:53.143 { 00:11:53.143 "name": "BaseBdev2", 00:11:53.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.143 "is_configured": false, 00:11:53.143 "data_offset": 0, 00:11:53.143 "data_size": 0 00:11:53.143 }, 00:11:53.143 { 00:11:53.143 "name": "BaseBdev3", 00:11:53.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.143 "is_configured": false, 00:11:53.143 "data_offset": 0, 00:11:53.143 "data_size": 0 00:11:53.143 }, 00:11:53.143 { 00:11:53.143 "name": "BaseBdev4", 00:11:53.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.143 "is_configured": false, 00:11:53.143 "data_offset": 0, 00:11:53.143 "data_size": 0 00:11:53.143 } 00:11:53.143 ] 00:11:53.143 }' 00:11:53.143 10:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.143 10:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.403 10:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:53.403 10:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.403 10:34:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.403 [2024-11-20 10:34:56.826463] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:53.403 [2024-11-20 10:34:56.826522] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:53.403 10:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.403 10:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:53.403 10:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.403 10:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.403 [2024-11-20 10:34:56.834523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:53.403 [2024-11-20 10:34:56.836592] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:53.403 [2024-11-20 10:34:56.836642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:53.403 [2024-11-20 10:34:56.836654] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:53.403 [2024-11-20 10:34:56.836668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:53.403 [2024-11-20 10:34:56.836676] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:53.403 [2024-11-20 10:34:56.836685] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:53.403 10:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.403 10:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:11:53.403 10:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:53.403 10:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:53.403 10:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.403 10:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:53.403 10:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:53.403 10:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:53.403 10:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.403 10:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.403 10:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.403 10:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.403 10:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.403 10:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.403 10:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.403 10:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.403 10:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.403 10:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.660 10:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:53.660 "name": "Existed_Raid", 00:11:53.660 "uuid": "ed117f91-8daf-4ecc-b1d8-7a32ab5c61ee", 00:11:53.660 "strip_size_kb": 64, 00:11:53.660 "state": "configuring", 00:11:53.660 "raid_level": "concat", 00:11:53.660 "superblock": true, 00:11:53.660 "num_base_bdevs": 4, 00:11:53.660 "num_base_bdevs_discovered": 1, 00:11:53.660 "num_base_bdevs_operational": 4, 00:11:53.660 "base_bdevs_list": [ 00:11:53.660 { 00:11:53.660 "name": "BaseBdev1", 00:11:53.660 "uuid": "b0294b13-8228-4cbe-999b-7ecf85f94b64", 00:11:53.660 "is_configured": true, 00:11:53.660 "data_offset": 2048, 00:11:53.660 "data_size": 63488 00:11:53.660 }, 00:11:53.660 { 00:11:53.660 "name": "BaseBdev2", 00:11:53.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.660 "is_configured": false, 00:11:53.660 "data_offset": 0, 00:11:53.660 "data_size": 0 00:11:53.660 }, 00:11:53.660 { 00:11:53.660 "name": "BaseBdev3", 00:11:53.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.661 "is_configured": false, 00:11:53.661 "data_offset": 0, 00:11:53.661 "data_size": 0 00:11:53.661 }, 00:11:53.661 { 00:11:53.661 "name": "BaseBdev4", 00:11:53.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.661 "is_configured": false, 00:11:53.661 "data_offset": 0, 00:11:53.661 "data_size": 0 00:11:53.661 } 00:11:53.661 ] 00:11:53.661 }' 00:11:53.661 10:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.661 10:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.920 10:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:53.920 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.920 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.920 [2024-11-20 10:34:57.299251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:11:53.920 BaseBdev2 00:11:53.920 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.920 10:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:53.920 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:53.920 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:53.920 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:53.920 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:53.920 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:53.920 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:53.920 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.920 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.920 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.920 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:53.920 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.920 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.920 [ 00:11:53.920 { 00:11:53.920 "name": "BaseBdev2", 00:11:53.920 "aliases": [ 00:11:53.920 "0cb9bfed-a033-4ec8-8753-f3c8043549dc" 00:11:53.920 ], 00:11:53.920 "product_name": "Malloc disk", 00:11:53.920 "block_size": 512, 00:11:53.920 "num_blocks": 65536, 00:11:53.920 "uuid": "0cb9bfed-a033-4ec8-8753-f3c8043549dc", 
00:11:53.920 "assigned_rate_limits": { 00:11:53.920 "rw_ios_per_sec": 0, 00:11:53.920 "rw_mbytes_per_sec": 0, 00:11:53.920 "r_mbytes_per_sec": 0, 00:11:53.920 "w_mbytes_per_sec": 0 00:11:53.920 }, 00:11:53.920 "claimed": true, 00:11:53.920 "claim_type": "exclusive_write", 00:11:53.920 "zoned": false, 00:11:53.920 "supported_io_types": { 00:11:53.920 "read": true, 00:11:53.920 "write": true, 00:11:53.920 "unmap": true, 00:11:53.920 "flush": true, 00:11:53.920 "reset": true, 00:11:53.920 "nvme_admin": false, 00:11:53.920 "nvme_io": false, 00:11:53.920 "nvme_io_md": false, 00:11:53.920 "write_zeroes": true, 00:11:53.920 "zcopy": true, 00:11:53.920 "get_zone_info": false, 00:11:53.920 "zone_management": false, 00:11:53.920 "zone_append": false, 00:11:53.920 "compare": false, 00:11:53.920 "compare_and_write": false, 00:11:53.920 "abort": true, 00:11:53.920 "seek_hole": false, 00:11:53.920 "seek_data": false, 00:11:53.920 "copy": true, 00:11:53.920 "nvme_iov_md": false 00:11:53.920 }, 00:11:53.920 "memory_domains": [ 00:11:53.920 { 00:11:53.920 "dma_device_id": "system", 00:11:53.920 "dma_device_type": 1 00:11:53.920 }, 00:11:53.920 { 00:11:53.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.920 "dma_device_type": 2 00:11:53.920 } 00:11:53.920 ], 00:11:53.920 "driver_specific": {} 00:11:53.920 } 00:11:53.920 ] 00:11:53.920 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.920 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:53.920 10:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:53.920 10:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:53.920 10:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:53.920 10:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:53.920 10:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:53.920 10:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:53.920 10:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:53.920 10:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.920 10:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.920 10:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.920 10:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.920 10:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.920 10:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.920 10:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.920 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.920 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.920 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.921 10:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.921 "name": "Existed_Raid", 00:11:53.921 "uuid": "ed117f91-8daf-4ecc-b1d8-7a32ab5c61ee", 00:11:53.921 "strip_size_kb": 64, 00:11:53.921 "state": "configuring", 00:11:53.921 "raid_level": "concat", 00:11:53.921 "superblock": true, 00:11:53.921 "num_base_bdevs": 4, 00:11:53.921 "num_base_bdevs_discovered": 2, 00:11:53.921 
"num_base_bdevs_operational": 4, 00:11:53.921 "base_bdevs_list": [ 00:11:53.921 { 00:11:53.921 "name": "BaseBdev1", 00:11:53.921 "uuid": "b0294b13-8228-4cbe-999b-7ecf85f94b64", 00:11:53.921 "is_configured": true, 00:11:53.921 "data_offset": 2048, 00:11:53.921 "data_size": 63488 00:11:53.921 }, 00:11:53.921 { 00:11:53.921 "name": "BaseBdev2", 00:11:53.921 "uuid": "0cb9bfed-a033-4ec8-8753-f3c8043549dc", 00:11:53.921 "is_configured": true, 00:11:53.921 "data_offset": 2048, 00:11:53.921 "data_size": 63488 00:11:53.921 }, 00:11:53.921 { 00:11:53.921 "name": "BaseBdev3", 00:11:53.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.921 "is_configured": false, 00:11:53.921 "data_offset": 0, 00:11:53.921 "data_size": 0 00:11:53.921 }, 00:11:53.921 { 00:11:53.921 "name": "BaseBdev4", 00:11:53.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.921 "is_configured": false, 00:11:53.921 "data_offset": 0, 00:11:53.921 "data_size": 0 00:11:53.921 } 00:11:53.921 ] 00:11:53.921 }' 00:11:53.921 10:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.921 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.490 10:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:54.490 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.490 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.490 [2024-11-20 10:34:57.850943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:54.490 BaseBdev3 00:11:54.490 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.490 10:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:54.490 10:34:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:54.490 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:54.490 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:54.490 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:54.490 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:54.490 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:54.490 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.490 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.490 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.490 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:54.490 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.491 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.491 [ 00:11:54.491 { 00:11:54.491 "name": "BaseBdev3", 00:11:54.491 "aliases": [ 00:11:54.491 "4980e9f0-47f7-49e0-b2a7-b49205b629d6" 00:11:54.491 ], 00:11:54.491 "product_name": "Malloc disk", 00:11:54.491 "block_size": 512, 00:11:54.491 "num_blocks": 65536, 00:11:54.491 "uuid": "4980e9f0-47f7-49e0-b2a7-b49205b629d6", 00:11:54.491 "assigned_rate_limits": { 00:11:54.491 "rw_ios_per_sec": 0, 00:11:54.491 "rw_mbytes_per_sec": 0, 00:11:54.491 "r_mbytes_per_sec": 0, 00:11:54.491 "w_mbytes_per_sec": 0 00:11:54.491 }, 00:11:54.491 "claimed": true, 00:11:54.491 "claim_type": "exclusive_write", 00:11:54.491 "zoned": false, 00:11:54.491 "supported_io_types": { 
00:11:54.491 "read": true, 00:11:54.491 "write": true, 00:11:54.491 "unmap": true, 00:11:54.491 "flush": true, 00:11:54.491 "reset": true, 00:11:54.491 "nvme_admin": false, 00:11:54.491 "nvme_io": false, 00:11:54.491 "nvme_io_md": false, 00:11:54.491 "write_zeroes": true, 00:11:54.491 "zcopy": true, 00:11:54.491 "get_zone_info": false, 00:11:54.491 "zone_management": false, 00:11:54.491 "zone_append": false, 00:11:54.491 "compare": false, 00:11:54.491 "compare_and_write": false, 00:11:54.491 "abort": true, 00:11:54.491 "seek_hole": false, 00:11:54.491 "seek_data": false, 00:11:54.491 "copy": true, 00:11:54.491 "nvme_iov_md": false 00:11:54.491 }, 00:11:54.491 "memory_domains": [ 00:11:54.491 { 00:11:54.491 "dma_device_id": "system", 00:11:54.491 "dma_device_type": 1 00:11:54.491 }, 00:11:54.491 { 00:11:54.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.491 "dma_device_type": 2 00:11:54.491 } 00:11:54.491 ], 00:11:54.491 "driver_specific": {} 00:11:54.491 } 00:11:54.491 ] 00:11:54.491 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.491 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:54.491 10:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:54.491 10:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:54.491 10:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:54.491 10:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.491 10:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:54.491 10:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:54.491 10:34:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:54.491 10:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.491 10:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.491 10:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.491 10:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.491 10:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.491 10:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.491 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.491 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.491 10:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.491 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.491 10:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.491 "name": "Existed_Raid", 00:11:54.491 "uuid": "ed117f91-8daf-4ecc-b1d8-7a32ab5c61ee", 00:11:54.491 "strip_size_kb": 64, 00:11:54.491 "state": "configuring", 00:11:54.491 "raid_level": "concat", 00:11:54.491 "superblock": true, 00:11:54.491 "num_base_bdevs": 4, 00:11:54.491 "num_base_bdevs_discovered": 3, 00:11:54.491 "num_base_bdevs_operational": 4, 00:11:54.491 "base_bdevs_list": [ 00:11:54.491 { 00:11:54.491 "name": "BaseBdev1", 00:11:54.491 "uuid": "b0294b13-8228-4cbe-999b-7ecf85f94b64", 00:11:54.491 "is_configured": true, 00:11:54.491 "data_offset": 2048, 00:11:54.491 "data_size": 63488 00:11:54.491 }, 00:11:54.491 { 00:11:54.491 "name": "BaseBdev2", 00:11:54.491 
"uuid": "0cb9bfed-a033-4ec8-8753-f3c8043549dc", 00:11:54.491 "is_configured": true, 00:11:54.491 "data_offset": 2048, 00:11:54.491 "data_size": 63488 00:11:54.491 }, 00:11:54.491 { 00:11:54.491 "name": "BaseBdev3", 00:11:54.491 "uuid": "4980e9f0-47f7-49e0-b2a7-b49205b629d6", 00:11:54.491 "is_configured": true, 00:11:54.491 "data_offset": 2048, 00:11:54.491 "data_size": 63488 00:11:54.491 }, 00:11:54.491 { 00:11:54.491 "name": "BaseBdev4", 00:11:54.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.491 "is_configured": false, 00:11:54.491 "data_offset": 0, 00:11:54.491 "data_size": 0 00:11:54.491 } 00:11:54.491 ] 00:11:54.491 }' 00:11:54.491 10:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.491 10:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.060 10:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:55.060 10:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.060 10:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.060 [2024-11-20 10:34:58.393439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:55.060 [2024-11-20 10:34:58.393808] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:55.060 [2024-11-20 10:34:58.393862] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:55.060 [2024-11-20 10:34:58.394151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:55.060 BaseBdev4 00:11:55.060 [2024-11-20 10:34:58.394363] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:55.060 [2024-11-20 10:34:58.394379] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:55.060 [2024-11-20 10:34:58.394541] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:55.060 10:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.060 10:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:55.060 10:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:55.060 10:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:55.060 10:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:55.060 10:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:55.060 10:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:55.060 10:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:55.060 10:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.060 10:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.060 10:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.060 10:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:55.060 10:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.060 10:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.060 [ 00:11:55.060 { 00:11:55.060 "name": "BaseBdev4", 00:11:55.060 "aliases": [ 00:11:55.060 "5bc6cdb1-a206-4a46-b964-8a0b8d1ec77f" 00:11:55.060 ], 00:11:55.060 "product_name": "Malloc disk", 00:11:55.060 "block_size": 512, 00:11:55.060 
"num_blocks": 65536, 00:11:55.060 "uuid": "5bc6cdb1-a206-4a46-b964-8a0b8d1ec77f", 00:11:55.060 "assigned_rate_limits": { 00:11:55.060 "rw_ios_per_sec": 0, 00:11:55.060 "rw_mbytes_per_sec": 0, 00:11:55.060 "r_mbytes_per_sec": 0, 00:11:55.060 "w_mbytes_per_sec": 0 00:11:55.060 }, 00:11:55.060 "claimed": true, 00:11:55.060 "claim_type": "exclusive_write", 00:11:55.060 "zoned": false, 00:11:55.060 "supported_io_types": { 00:11:55.060 "read": true, 00:11:55.060 "write": true, 00:11:55.060 "unmap": true, 00:11:55.060 "flush": true, 00:11:55.060 "reset": true, 00:11:55.060 "nvme_admin": false, 00:11:55.060 "nvme_io": false, 00:11:55.060 "nvme_io_md": false, 00:11:55.060 "write_zeroes": true, 00:11:55.060 "zcopy": true, 00:11:55.060 "get_zone_info": false, 00:11:55.060 "zone_management": false, 00:11:55.060 "zone_append": false, 00:11:55.060 "compare": false, 00:11:55.060 "compare_and_write": false, 00:11:55.060 "abort": true, 00:11:55.060 "seek_hole": false, 00:11:55.060 "seek_data": false, 00:11:55.060 "copy": true, 00:11:55.060 "nvme_iov_md": false 00:11:55.060 }, 00:11:55.060 "memory_domains": [ 00:11:55.060 { 00:11:55.060 "dma_device_id": "system", 00:11:55.060 "dma_device_type": 1 00:11:55.060 }, 00:11:55.060 { 00:11:55.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.060 "dma_device_type": 2 00:11:55.060 } 00:11:55.060 ], 00:11:55.060 "driver_specific": {} 00:11:55.060 } 00:11:55.060 ] 00:11:55.060 10:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.060 10:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:55.061 10:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:55.061 10:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:55.061 10:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:11:55.061 10:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.061 10:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:55.061 10:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:55.061 10:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.061 10:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.061 10:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.061 10:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.061 10:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.061 10:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.061 10:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.061 10:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.061 10:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.061 10:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.061 10:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.061 10:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.061 "name": "Existed_Raid", 00:11:55.061 "uuid": "ed117f91-8daf-4ecc-b1d8-7a32ab5c61ee", 00:11:55.061 "strip_size_kb": 64, 00:11:55.061 "state": "online", 00:11:55.061 "raid_level": "concat", 00:11:55.061 "superblock": true, 00:11:55.061 "num_base_bdevs": 4, 
00:11:55.061 "num_base_bdevs_discovered": 4, 00:11:55.061 "num_base_bdevs_operational": 4, 00:11:55.061 "base_bdevs_list": [ 00:11:55.061 { 00:11:55.061 "name": "BaseBdev1", 00:11:55.061 "uuid": "b0294b13-8228-4cbe-999b-7ecf85f94b64", 00:11:55.061 "is_configured": true, 00:11:55.061 "data_offset": 2048, 00:11:55.061 "data_size": 63488 00:11:55.061 }, 00:11:55.061 { 00:11:55.061 "name": "BaseBdev2", 00:11:55.061 "uuid": "0cb9bfed-a033-4ec8-8753-f3c8043549dc", 00:11:55.061 "is_configured": true, 00:11:55.061 "data_offset": 2048, 00:11:55.061 "data_size": 63488 00:11:55.061 }, 00:11:55.061 { 00:11:55.061 "name": "BaseBdev3", 00:11:55.061 "uuid": "4980e9f0-47f7-49e0-b2a7-b49205b629d6", 00:11:55.061 "is_configured": true, 00:11:55.061 "data_offset": 2048, 00:11:55.061 "data_size": 63488 00:11:55.061 }, 00:11:55.061 { 00:11:55.061 "name": "BaseBdev4", 00:11:55.061 "uuid": "5bc6cdb1-a206-4a46-b964-8a0b8d1ec77f", 00:11:55.061 "is_configured": true, 00:11:55.061 "data_offset": 2048, 00:11:55.061 "data_size": 63488 00:11:55.061 } 00:11:55.061 ] 00:11:55.061 }' 00:11:55.061 10:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.061 10:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.629 10:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:55.630 10:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:55.630 10:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:55.630 10:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:55.630 10:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:55.630 10:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:55.630 
10:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:55.630 10:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:55.630 10:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.630 10:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.630 [2024-11-20 10:34:58.857035] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:55.630 10:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.630 10:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:55.630 "name": "Existed_Raid", 00:11:55.630 "aliases": [ 00:11:55.630 "ed117f91-8daf-4ecc-b1d8-7a32ab5c61ee" 00:11:55.630 ], 00:11:55.630 "product_name": "Raid Volume", 00:11:55.630 "block_size": 512, 00:11:55.630 "num_blocks": 253952, 00:11:55.630 "uuid": "ed117f91-8daf-4ecc-b1d8-7a32ab5c61ee", 00:11:55.630 "assigned_rate_limits": { 00:11:55.630 "rw_ios_per_sec": 0, 00:11:55.630 "rw_mbytes_per_sec": 0, 00:11:55.630 "r_mbytes_per_sec": 0, 00:11:55.630 "w_mbytes_per_sec": 0 00:11:55.630 }, 00:11:55.630 "claimed": false, 00:11:55.630 "zoned": false, 00:11:55.630 "supported_io_types": { 00:11:55.630 "read": true, 00:11:55.630 "write": true, 00:11:55.630 "unmap": true, 00:11:55.630 "flush": true, 00:11:55.630 "reset": true, 00:11:55.630 "nvme_admin": false, 00:11:55.630 "nvme_io": false, 00:11:55.630 "nvme_io_md": false, 00:11:55.630 "write_zeroes": true, 00:11:55.630 "zcopy": false, 00:11:55.630 "get_zone_info": false, 00:11:55.630 "zone_management": false, 00:11:55.630 "zone_append": false, 00:11:55.630 "compare": false, 00:11:55.630 "compare_and_write": false, 00:11:55.630 "abort": false, 00:11:55.630 "seek_hole": false, 00:11:55.630 "seek_data": false, 00:11:55.630 "copy": false, 00:11:55.630 
"nvme_iov_md": false 00:11:55.630 }, 00:11:55.630 "memory_domains": [ 00:11:55.630 { 00:11:55.630 "dma_device_id": "system", 00:11:55.630 "dma_device_type": 1 00:11:55.630 }, 00:11:55.630 { 00:11:55.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.630 "dma_device_type": 2 00:11:55.630 }, 00:11:55.630 { 00:11:55.630 "dma_device_id": "system", 00:11:55.630 "dma_device_type": 1 00:11:55.630 }, 00:11:55.630 { 00:11:55.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.630 "dma_device_type": 2 00:11:55.630 }, 00:11:55.630 { 00:11:55.630 "dma_device_id": "system", 00:11:55.630 "dma_device_type": 1 00:11:55.630 }, 00:11:55.630 { 00:11:55.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.630 "dma_device_type": 2 00:11:55.630 }, 00:11:55.630 { 00:11:55.630 "dma_device_id": "system", 00:11:55.630 "dma_device_type": 1 00:11:55.630 }, 00:11:55.630 { 00:11:55.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.630 "dma_device_type": 2 00:11:55.630 } 00:11:55.630 ], 00:11:55.630 "driver_specific": { 00:11:55.630 "raid": { 00:11:55.630 "uuid": "ed117f91-8daf-4ecc-b1d8-7a32ab5c61ee", 00:11:55.630 "strip_size_kb": 64, 00:11:55.630 "state": "online", 00:11:55.630 "raid_level": "concat", 00:11:55.630 "superblock": true, 00:11:55.630 "num_base_bdevs": 4, 00:11:55.630 "num_base_bdevs_discovered": 4, 00:11:55.630 "num_base_bdevs_operational": 4, 00:11:55.630 "base_bdevs_list": [ 00:11:55.630 { 00:11:55.630 "name": "BaseBdev1", 00:11:55.630 "uuid": "b0294b13-8228-4cbe-999b-7ecf85f94b64", 00:11:55.630 "is_configured": true, 00:11:55.630 "data_offset": 2048, 00:11:55.630 "data_size": 63488 00:11:55.630 }, 00:11:55.630 { 00:11:55.630 "name": "BaseBdev2", 00:11:55.630 "uuid": "0cb9bfed-a033-4ec8-8753-f3c8043549dc", 00:11:55.630 "is_configured": true, 00:11:55.630 "data_offset": 2048, 00:11:55.630 "data_size": 63488 00:11:55.630 }, 00:11:55.630 { 00:11:55.630 "name": "BaseBdev3", 00:11:55.630 "uuid": "4980e9f0-47f7-49e0-b2a7-b49205b629d6", 00:11:55.630 "is_configured": true, 
00:11:55.630 "data_offset": 2048, 00:11:55.630 "data_size": 63488 00:11:55.630 }, 00:11:55.630 { 00:11:55.630 "name": "BaseBdev4", 00:11:55.630 "uuid": "5bc6cdb1-a206-4a46-b964-8a0b8d1ec77f", 00:11:55.630 "is_configured": true, 00:11:55.630 "data_offset": 2048, 00:11:55.630 "data_size": 63488 00:11:55.630 } 00:11:55.630 ] 00:11:55.630 } 00:11:55.630 } 00:11:55.630 }' 00:11:55.630 10:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:55.630 10:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:55.630 BaseBdev2 00:11:55.630 BaseBdev3 00:11:55.630 BaseBdev4' 00:11:55.630 10:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:55.630 10:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:55.630 10:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:55.630 10:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:55.630 10:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:55.630 10:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.630 10:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.630 10:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.630 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:55.630 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:55.630 10:34:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:55.630 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:55.630 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:55.630 10:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.630 10:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.630 10:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.630 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:55.630 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:55.630 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:55.630 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:55.630 10:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.630 10:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.630 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:55.630 10:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.895 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:55.895 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:55.895 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:55.895 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:55.895 10:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.895 10:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.895 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:55.895 10:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.895 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:55.895 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:55.895 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:55.895 10:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.895 10:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.895 [2024-11-20 10:34:59.152283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:55.895 [2024-11-20 10:34:59.152392] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:55.895 [2024-11-20 10:34:59.152459] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:55.895 10:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.895 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:55.895 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:55.895 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:55.895 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:55.895 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:55.895 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:55.895 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.895 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:55.895 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:55.895 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.895 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:55.895 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.895 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.895 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.895 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.895 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.895 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.895 10:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.895 10:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.895 10:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:55.895 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.895 "name": "Existed_Raid", 00:11:55.895 "uuid": "ed117f91-8daf-4ecc-b1d8-7a32ab5c61ee", 00:11:55.895 "strip_size_kb": 64, 00:11:55.895 "state": "offline", 00:11:55.895 "raid_level": "concat", 00:11:55.895 "superblock": true, 00:11:55.895 "num_base_bdevs": 4, 00:11:55.895 "num_base_bdevs_discovered": 3, 00:11:55.895 "num_base_bdevs_operational": 3, 00:11:55.895 "base_bdevs_list": [ 00:11:55.895 { 00:11:55.895 "name": null, 00:11:55.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.895 "is_configured": false, 00:11:55.895 "data_offset": 0, 00:11:55.895 "data_size": 63488 00:11:55.895 }, 00:11:55.895 { 00:11:55.895 "name": "BaseBdev2", 00:11:55.895 "uuid": "0cb9bfed-a033-4ec8-8753-f3c8043549dc", 00:11:55.895 "is_configured": true, 00:11:55.895 "data_offset": 2048, 00:11:55.895 "data_size": 63488 00:11:55.895 }, 00:11:55.895 { 00:11:55.895 "name": "BaseBdev3", 00:11:55.895 "uuid": "4980e9f0-47f7-49e0-b2a7-b49205b629d6", 00:11:55.895 "is_configured": true, 00:11:55.895 "data_offset": 2048, 00:11:55.895 "data_size": 63488 00:11:55.895 }, 00:11:55.895 { 00:11:55.895 "name": "BaseBdev4", 00:11:55.895 "uuid": "5bc6cdb1-a206-4a46-b964-8a0b8d1ec77f", 00:11:55.895 "is_configured": true, 00:11:55.895 "data_offset": 2048, 00:11:55.895 "data_size": 63488 00:11:55.895 } 00:11:55.895 ] 00:11:55.895 }' 00:11:55.895 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.895 10:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.465 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:56.465 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:56.465 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.465 
10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:56.465 10:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.465 10:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.465 10:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.465 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:56.465 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:56.465 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:56.465 10:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.465 10:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.465 [2024-11-20 10:34:59.783624] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:56.465 10:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.465 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:56.465 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:56.465 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:56.465 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.465 10:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.465 10:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.465 10:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:56.465 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:56.465 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:56.465 10:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:56.465 10:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.465 10:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.465 [2024-11-20 10:34:59.936177] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:56.724 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.724 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:56.724 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:56.724 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:56.724 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.724 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.724 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.724 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.724 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:56.724 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:56.724 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:56.724 10:35:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.724 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.724 [2024-11-20 10:35:00.083686] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:56.724 [2024-11-20 10:35:00.083818] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:56.724 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.724 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:56.724 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:56.724 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.724 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.724 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.724 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:56.724 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.984 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:56.984 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:56.984 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:56.984 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:56.984 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:56.984 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:56.984 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.984 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.984 BaseBdev2 00:11:56.984 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.984 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:56.984 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:56.984 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:56.984 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:56.984 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.985 [ 00:11:56.985 { 00:11:56.985 "name": "BaseBdev2", 00:11:56.985 "aliases": [ 00:11:56.985 
"631f0b46-7883-4a7b-922d-aadd3f446df2" 00:11:56.985 ], 00:11:56.985 "product_name": "Malloc disk", 00:11:56.985 "block_size": 512, 00:11:56.985 "num_blocks": 65536, 00:11:56.985 "uuid": "631f0b46-7883-4a7b-922d-aadd3f446df2", 00:11:56.985 "assigned_rate_limits": { 00:11:56.985 "rw_ios_per_sec": 0, 00:11:56.985 "rw_mbytes_per_sec": 0, 00:11:56.985 "r_mbytes_per_sec": 0, 00:11:56.985 "w_mbytes_per_sec": 0 00:11:56.985 }, 00:11:56.985 "claimed": false, 00:11:56.985 "zoned": false, 00:11:56.985 "supported_io_types": { 00:11:56.985 "read": true, 00:11:56.985 "write": true, 00:11:56.985 "unmap": true, 00:11:56.985 "flush": true, 00:11:56.985 "reset": true, 00:11:56.985 "nvme_admin": false, 00:11:56.985 "nvme_io": false, 00:11:56.985 "nvme_io_md": false, 00:11:56.985 "write_zeroes": true, 00:11:56.985 "zcopy": true, 00:11:56.985 "get_zone_info": false, 00:11:56.985 "zone_management": false, 00:11:56.985 "zone_append": false, 00:11:56.985 "compare": false, 00:11:56.985 "compare_and_write": false, 00:11:56.985 "abort": true, 00:11:56.985 "seek_hole": false, 00:11:56.985 "seek_data": false, 00:11:56.985 "copy": true, 00:11:56.985 "nvme_iov_md": false 00:11:56.985 }, 00:11:56.985 "memory_domains": [ 00:11:56.985 { 00:11:56.985 "dma_device_id": "system", 00:11:56.985 "dma_device_type": 1 00:11:56.985 }, 00:11:56.985 { 00:11:56.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.985 "dma_device_type": 2 00:11:56.985 } 00:11:56.985 ], 00:11:56.985 "driver_specific": {} 00:11:56.985 } 00:11:56.985 ] 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:56.985 10:35:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.985 BaseBdev3 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.985 [ 00:11:56.985 { 
00:11:56.985 "name": "BaseBdev3", 00:11:56.985 "aliases": [ 00:11:56.985 "759f8e57-cbbf-4a84-9e15-5a54559579dc" 00:11:56.985 ], 00:11:56.985 "product_name": "Malloc disk", 00:11:56.985 "block_size": 512, 00:11:56.985 "num_blocks": 65536, 00:11:56.985 "uuid": "759f8e57-cbbf-4a84-9e15-5a54559579dc", 00:11:56.985 "assigned_rate_limits": { 00:11:56.985 "rw_ios_per_sec": 0, 00:11:56.985 "rw_mbytes_per_sec": 0, 00:11:56.985 "r_mbytes_per_sec": 0, 00:11:56.985 "w_mbytes_per_sec": 0 00:11:56.985 }, 00:11:56.985 "claimed": false, 00:11:56.985 "zoned": false, 00:11:56.985 "supported_io_types": { 00:11:56.985 "read": true, 00:11:56.985 "write": true, 00:11:56.985 "unmap": true, 00:11:56.985 "flush": true, 00:11:56.985 "reset": true, 00:11:56.985 "nvme_admin": false, 00:11:56.985 "nvme_io": false, 00:11:56.985 "nvme_io_md": false, 00:11:56.985 "write_zeroes": true, 00:11:56.985 "zcopy": true, 00:11:56.985 "get_zone_info": false, 00:11:56.985 "zone_management": false, 00:11:56.985 "zone_append": false, 00:11:56.985 "compare": false, 00:11:56.985 "compare_and_write": false, 00:11:56.985 "abort": true, 00:11:56.985 "seek_hole": false, 00:11:56.985 "seek_data": false, 00:11:56.985 "copy": true, 00:11:56.985 "nvme_iov_md": false 00:11:56.985 }, 00:11:56.985 "memory_domains": [ 00:11:56.985 { 00:11:56.985 "dma_device_id": "system", 00:11:56.985 "dma_device_type": 1 00:11:56.985 }, 00:11:56.985 { 00:11:56.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.985 "dma_device_type": 2 00:11:56.985 } 00:11:56.985 ], 00:11:56.985 "driver_specific": {} 00:11:56.985 } 00:11:56.985 ] 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.985 BaseBdev4 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.985 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:57.244 [ 00:11:57.244 { 00:11:57.244 "name": "BaseBdev4", 00:11:57.244 "aliases": [ 00:11:57.244 "da62aea7-b90f-4c59-8695-7d3d5e203f9d" 00:11:57.244 ], 00:11:57.244 "product_name": "Malloc disk", 00:11:57.244 "block_size": 512, 00:11:57.244 "num_blocks": 65536, 00:11:57.244 "uuid": "da62aea7-b90f-4c59-8695-7d3d5e203f9d", 00:11:57.244 "assigned_rate_limits": { 00:11:57.244 "rw_ios_per_sec": 0, 00:11:57.244 "rw_mbytes_per_sec": 0, 00:11:57.244 "r_mbytes_per_sec": 0, 00:11:57.244 "w_mbytes_per_sec": 0 00:11:57.244 }, 00:11:57.244 "claimed": false, 00:11:57.244 "zoned": false, 00:11:57.244 "supported_io_types": { 00:11:57.244 "read": true, 00:11:57.244 "write": true, 00:11:57.244 "unmap": true, 00:11:57.244 "flush": true, 00:11:57.244 "reset": true, 00:11:57.244 "nvme_admin": false, 00:11:57.244 "nvme_io": false, 00:11:57.244 "nvme_io_md": false, 00:11:57.244 "write_zeroes": true, 00:11:57.244 "zcopy": true, 00:11:57.244 "get_zone_info": false, 00:11:57.244 "zone_management": false, 00:11:57.244 "zone_append": false, 00:11:57.244 "compare": false, 00:11:57.244 "compare_and_write": false, 00:11:57.244 "abort": true, 00:11:57.244 "seek_hole": false, 00:11:57.244 "seek_data": false, 00:11:57.244 "copy": true, 00:11:57.244 "nvme_iov_md": false 00:11:57.244 }, 00:11:57.244 "memory_domains": [ 00:11:57.244 { 00:11:57.244 "dma_device_id": "system", 00:11:57.244 "dma_device_type": 1 00:11:57.244 }, 00:11:57.244 { 00:11:57.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.244 "dma_device_type": 2 00:11:57.244 } 00:11:57.244 ], 00:11:57.244 "driver_specific": {} 00:11:57.244 } 00:11:57.244 ] 00:11:57.244 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.244 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:57.244 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:57.244 10:35:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:57.244 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:57.244 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.244 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.244 [2024-11-20 10:35:00.488471] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:57.244 [2024-11-20 10:35:00.488558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:57.244 [2024-11-20 10:35:00.488603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:57.244 [2024-11-20 10:35:00.490482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:57.244 [2024-11-20 10:35:00.490530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:57.244 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.244 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:57.245 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.245 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.245 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:57.245 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.245 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:57.245 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.245 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.245 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.245 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.245 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.245 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.245 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.245 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.245 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.245 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.245 "name": "Existed_Raid", 00:11:57.245 "uuid": "e66d2ace-db0d-45b8-bf48-223fbf8a2098", 00:11:57.245 "strip_size_kb": 64, 00:11:57.245 "state": "configuring", 00:11:57.245 "raid_level": "concat", 00:11:57.245 "superblock": true, 00:11:57.245 "num_base_bdevs": 4, 00:11:57.245 "num_base_bdevs_discovered": 3, 00:11:57.245 "num_base_bdevs_operational": 4, 00:11:57.245 "base_bdevs_list": [ 00:11:57.245 { 00:11:57.245 "name": "BaseBdev1", 00:11:57.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.245 "is_configured": false, 00:11:57.245 "data_offset": 0, 00:11:57.245 "data_size": 0 00:11:57.245 }, 00:11:57.245 { 00:11:57.245 "name": "BaseBdev2", 00:11:57.245 "uuid": "631f0b46-7883-4a7b-922d-aadd3f446df2", 00:11:57.245 "is_configured": true, 00:11:57.245 "data_offset": 2048, 00:11:57.245 "data_size": 63488 
00:11:57.245 }, 00:11:57.245 { 00:11:57.245 "name": "BaseBdev3", 00:11:57.245 "uuid": "759f8e57-cbbf-4a84-9e15-5a54559579dc", 00:11:57.245 "is_configured": true, 00:11:57.245 "data_offset": 2048, 00:11:57.245 "data_size": 63488 00:11:57.245 }, 00:11:57.245 { 00:11:57.245 "name": "BaseBdev4", 00:11:57.245 "uuid": "da62aea7-b90f-4c59-8695-7d3d5e203f9d", 00:11:57.245 "is_configured": true, 00:11:57.245 "data_offset": 2048, 00:11:57.245 "data_size": 63488 00:11:57.245 } 00:11:57.245 ] 00:11:57.245 }' 00:11:57.245 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.245 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.813 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:57.813 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.813 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.813 [2024-11-20 10:35:00.995624] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:57.813 10:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.813 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:57.813 10:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.813 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.813 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:57.813 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.814 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:57.814 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.814 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.814 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.814 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.814 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.814 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.814 10:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.814 10:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.814 10:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.814 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.814 "name": "Existed_Raid", 00:11:57.814 "uuid": "e66d2ace-db0d-45b8-bf48-223fbf8a2098", 00:11:57.814 "strip_size_kb": 64, 00:11:57.814 "state": "configuring", 00:11:57.814 "raid_level": "concat", 00:11:57.814 "superblock": true, 00:11:57.814 "num_base_bdevs": 4, 00:11:57.814 "num_base_bdevs_discovered": 2, 00:11:57.814 "num_base_bdevs_operational": 4, 00:11:57.814 "base_bdevs_list": [ 00:11:57.814 { 00:11:57.814 "name": "BaseBdev1", 00:11:57.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.814 "is_configured": false, 00:11:57.814 "data_offset": 0, 00:11:57.814 "data_size": 0 00:11:57.814 }, 00:11:57.814 { 00:11:57.814 "name": null, 00:11:57.814 "uuid": "631f0b46-7883-4a7b-922d-aadd3f446df2", 00:11:57.814 "is_configured": false, 00:11:57.814 "data_offset": 0, 00:11:57.814 "data_size": 63488 
00:11:57.814 }, 00:11:57.814 { 00:11:57.814 "name": "BaseBdev3", 00:11:57.814 "uuid": "759f8e57-cbbf-4a84-9e15-5a54559579dc", 00:11:57.814 "is_configured": true, 00:11:57.814 "data_offset": 2048, 00:11:57.814 "data_size": 63488 00:11:57.814 }, 00:11:57.814 { 00:11:57.814 "name": "BaseBdev4", 00:11:57.814 "uuid": "da62aea7-b90f-4c59-8695-7d3d5e203f9d", 00:11:57.814 "is_configured": true, 00:11:57.814 "data_offset": 2048, 00:11:57.814 "data_size": 63488 00:11:57.814 } 00:11:57.814 ] 00:11:57.814 }' 00:11:57.814 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.814 10:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.155 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.155 10:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.155 10:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.155 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:58.155 10:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.155 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:58.155 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:58.155 10:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.155 10:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.155 [2024-11-20 10:35:01.550038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:58.155 BaseBdev1 00:11:58.156 10:35:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.156 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:58.156 10:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:58.156 10:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:58.156 10:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:58.156 10:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:58.156 10:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:58.156 10:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:58.156 10:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.156 10:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.156 10:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.156 10:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:58.156 10:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.156 10:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.156 [ 00:11:58.156 { 00:11:58.156 "name": "BaseBdev1", 00:11:58.156 "aliases": [ 00:11:58.156 "8c607d21-94ce-426d-b798-da5850308e6c" 00:11:58.156 ], 00:11:58.156 "product_name": "Malloc disk", 00:11:58.156 "block_size": 512, 00:11:58.156 "num_blocks": 65536, 00:11:58.156 "uuid": "8c607d21-94ce-426d-b798-da5850308e6c", 00:11:58.156 "assigned_rate_limits": { 00:11:58.156 "rw_ios_per_sec": 0, 00:11:58.156 "rw_mbytes_per_sec": 0, 
00:11:58.156 "r_mbytes_per_sec": 0, 00:11:58.156 "w_mbytes_per_sec": 0 00:11:58.156 }, 00:11:58.156 "claimed": true, 00:11:58.156 "claim_type": "exclusive_write", 00:11:58.156 "zoned": false, 00:11:58.156 "supported_io_types": { 00:11:58.156 "read": true, 00:11:58.156 "write": true, 00:11:58.156 "unmap": true, 00:11:58.156 "flush": true, 00:11:58.156 "reset": true, 00:11:58.156 "nvme_admin": false, 00:11:58.156 "nvme_io": false, 00:11:58.156 "nvme_io_md": false, 00:11:58.156 "write_zeroes": true, 00:11:58.156 "zcopy": true, 00:11:58.156 "get_zone_info": false, 00:11:58.156 "zone_management": false, 00:11:58.156 "zone_append": false, 00:11:58.156 "compare": false, 00:11:58.156 "compare_and_write": false, 00:11:58.156 "abort": true, 00:11:58.156 "seek_hole": false, 00:11:58.156 "seek_data": false, 00:11:58.156 "copy": true, 00:11:58.156 "nvme_iov_md": false 00:11:58.156 }, 00:11:58.156 "memory_domains": [ 00:11:58.156 { 00:11:58.156 "dma_device_id": "system", 00:11:58.156 "dma_device_type": 1 00:11:58.156 }, 00:11:58.156 { 00:11:58.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.156 "dma_device_type": 2 00:11:58.156 } 00:11:58.156 ], 00:11:58.156 "driver_specific": {} 00:11:58.156 } 00:11:58.156 ] 00:11:58.156 10:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.156 10:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:58.156 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:58.156 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.156 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.156 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:58.156 10:35:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.156 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.156 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.156 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.156 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.156 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.156 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.156 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.156 10:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.156 10:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.418 10:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.418 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.418 "name": "Existed_Raid", 00:11:58.418 "uuid": "e66d2ace-db0d-45b8-bf48-223fbf8a2098", 00:11:58.418 "strip_size_kb": 64, 00:11:58.418 "state": "configuring", 00:11:58.418 "raid_level": "concat", 00:11:58.418 "superblock": true, 00:11:58.418 "num_base_bdevs": 4, 00:11:58.418 "num_base_bdevs_discovered": 3, 00:11:58.418 "num_base_bdevs_operational": 4, 00:11:58.418 "base_bdevs_list": [ 00:11:58.418 { 00:11:58.418 "name": "BaseBdev1", 00:11:58.418 "uuid": "8c607d21-94ce-426d-b798-da5850308e6c", 00:11:58.418 "is_configured": true, 00:11:58.418 "data_offset": 2048, 00:11:58.418 "data_size": 63488 00:11:58.418 }, 00:11:58.418 { 
00:11:58.418 "name": null, 00:11:58.418 "uuid": "631f0b46-7883-4a7b-922d-aadd3f446df2", 00:11:58.418 "is_configured": false, 00:11:58.418 "data_offset": 0, 00:11:58.418 "data_size": 63488 00:11:58.418 }, 00:11:58.418 { 00:11:58.418 "name": "BaseBdev3", 00:11:58.418 "uuid": "759f8e57-cbbf-4a84-9e15-5a54559579dc", 00:11:58.418 "is_configured": true, 00:11:58.418 "data_offset": 2048, 00:11:58.418 "data_size": 63488 00:11:58.418 }, 00:11:58.418 { 00:11:58.418 "name": "BaseBdev4", 00:11:58.418 "uuid": "da62aea7-b90f-4c59-8695-7d3d5e203f9d", 00:11:58.418 "is_configured": true, 00:11:58.418 "data_offset": 2048, 00:11:58.418 "data_size": 63488 00:11:58.418 } 00:11:58.418 ] 00:11:58.418 }' 00:11:58.418 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.418 10:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.679 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:58.679 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.679 10:35:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.679 10:35:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.679 10:35:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.679 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:58.679 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:58.679 10:35:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.679 10:35:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.679 [2024-11-20 10:35:02.109186] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:58.679 10:35:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.679 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:58.679 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.679 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.679 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:58.679 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.679 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.679 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.679 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.679 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.679 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.679 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.679 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.679 10:35:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.679 10:35:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.679 10:35:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.940 10:35:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.940 "name": "Existed_Raid", 00:11:58.940 "uuid": "e66d2ace-db0d-45b8-bf48-223fbf8a2098", 00:11:58.940 "strip_size_kb": 64, 00:11:58.940 "state": "configuring", 00:11:58.940 "raid_level": "concat", 00:11:58.940 "superblock": true, 00:11:58.940 "num_base_bdevs": 4, 00:11:58.940 "num_base_bdevs_discovered": 2, 00:11:58.940 "num_base_bdevs_operational": 4, 00:11:58.940 "base_bdevs_list": [ 00:11:58.940 { 00:11:58.940 "name": "BaseBdev1", 00:11:58.940 "uuid": "8c607d21-94ce-426d-b798-da5850308e6c", 00:11:58.940 "is_configured": true, 00:11:58.940 "data_offset": 2048, 00:11:58.940 "data_size": 63488 00:11:58.940 }, 00:11:58.940 { 00:11:58.940 "name": null, 00:11:58.940 "uuid": "631f0b46-7883-4a7b-922d-aadd3f446df2", 00:11:58.940 "is_configured": false, 00:11:58.940 "data_offset": 0, 00:11:58.940 "data_size": 63488 00:11:58.940 }, 00:11:58.940 { 00:11:58.940 "name": null, 00:11:58.940 "uuid": "759f8e57-cbbf-4a84-9e15-5a54559579dc", 00:11:58.940 "is_configured": false, 00:11:58.940 "data_offset": 0, 00:11:58.940 "data_size": 63488 00:11:58.940 }, 00:11:58.940 { 00:11:58.940 "name": "BaseBdev4", 00:11:58.940 "uuid": "da62aea7-b90f-4c59-8695-7d3d5e203f9d", 00:11:58.940 "is_configured": true, 00:11:58.940 "data_offset": 2048, 00:11:58.940 "data_size": 63488 00:11:58.940 } 00:11:58.940 ] 00:11:58.940 }' 00:11:58.940 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.940 10:35:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.200 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.200 10:35:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.200 10:35:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.200 10:35:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:59.200 10:35:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.200 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:59.200 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:59.200 10:35:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.200 10:35:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.200 [2024-11-20 10:35:02.628319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:59.200 10:35:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.200 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:59.200 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.200 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:59.200 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:59.200 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:59.200 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:59.200 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.200 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.200 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:59.200 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.200 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.200 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.200 10:35:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.200 10:35:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.200 10:35:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.461 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.461 "name": "Existed_Raid", 00:11:59.461 "uuid": "e66d2ace-db0d-45b8-bf48-223fbf8a2098", 00:11:59.461 "strip_size_kb": 64, 00:11:59.461 "state": "configuring", 00:11:59.461 "raid_level": "concat", 00:11:59.461 "superblock": true, 00:11:59.461 "num_base_bdevs": 4, 00:11:59.461 "num_base_bdevs_discovered": 3, 00:11:59.461 "num_base_bdevs_operational": 4, 00:11:59.461 "base_bdevs_list": [ 00:11:59.461 { 00:11:59.461 "name": "BaseBdev1", 00:11:59.461 "uuid": "8c607d21-94ce-426d-b798-da5850308e6c", 00:11:59.461 "is_configured": true, 00:11:59.461 "data_offset": 2048, 00:11:59.461 "data_size": 63488 00:11:59.461 }, 00:11:59.461 { 00:11:59.461 "name": null, 00:11:59.461 "uuid": "631f0b46-7883-4a7b-922d-aadd3f446df2", 00:11:59.461 "is_configured": false, 00:11:59.461 "data_offset": 0, 00:11:59.461 "data_size": 63488 00:11:59.461 }, 00:11:59.461 { 00:11:59.461 "name": "BaseBdev3", 00:11:59.461 "uuid": "759f8e57-cbbf-4a84-9e15-5a54559579dc", 00:11:59.461 "is_configured": true, 00:11:59.461 "data_offset": 2048, 00:11:59.461 "data_size": 63488 00:11:59.461 }, 00:11:59.461 { 00:11:59.461 "name": "BaseBdev4", 00:11:59.461 "uuid": 
"da62aea7-b90f-4c59-8695-7d3d5e203f9d", 00:11:59.461 "is_configured": true, 00:11:59.461 "data_offset": 2048, 00:11:59.461 "data_size": 63488 00:11:59.461 } 00:11:59.461 ] 00:11:59.461 }' 00:11:59.461 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.461 10:35:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.721 10:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.721 10:35:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.721 10:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:59.721 10:35:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.721 10:35:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.721 10:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:59.721 10:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:59.721 10:35:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.721 10:35:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.721 [2024-11-20 10:35:03.159486] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:59.982 10:35:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.982 10:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:59.982 10:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.982 10:35:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:59.982 10:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:59.982 10:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:59.982 10:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:59.982 10:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.982 10:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.982 10:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.982 10:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.982 10:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.982 10:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.982 10:35:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.982 10:35:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.982 10:35:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.982 10:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.982 "name": "Existed_Raid", 00:11:59.982 "uuid": "e66d2ace-db0d-45b8-bf48-223fbf8a2098", 00:11:59.982 "strip_size_kb": 64, 00:11:59.982 "state": "configuring", 00:11:59.982 "raid_level": "concat", 00:11:59.982 "superblock": true, 00:11:59.982 "num_base_bdevs": 4, 00:11:59.982 "num_base_bdevs_discovered": 2, 00:11:59.982 "num_base_bdevs_operational": 4, 00:11:59.982 "base_bdevs_list": [ 00:11:59.982 { 00:11:59.982 "name": null, 00:11:59.982 
"uuid": "8c607d21-94ce-426d-b798-da5850308e6c", 00:11:59.982 "is_configured": false, 00:11:59.982 "data_offset": 0, 00:11:59.982 "data_size": 63488 00:11:59.982 }, 00:11:59.982 { 00:11:59.983 "name": null, 00:11:59.983 "uuid": "631f0b46-7883-4a7b-922d-aadd3f446df2", 00:11:59.983 "is_configured": false, 00:11:59.983 "data_offset": 0, 00:11:59.983 "data_size": 63488 00:11:59.983 }, 00:11:59.983 { 00:11:59.983 "name": "BaseBdev3", 00:11:59.983 "uuid": "759f8e57-cbbf-4a84-9e15-5a54559579dc", 00:11:59.983 "is_configured": true, 00:11:59.983 "data_offset": 2048, 00:11:59.983 "data_size": 63488 00:11:59.983 }, 00:11:59.983 { 00:11:59.983 "name": "BaseBdev4", 00:11:59.983 "uuid": "da62aea7-b90f-4c59-8695-7d3d5e203f9d", 00:11:59.983 "is_configured": true, 00:11:59.983 "data_offset": 2048, 00:11:59.983 "data_size": 63488 00:11:59.983 } 00:11:59.983 ] 00:11:59.983 }' 00:11:59.983 10:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.983 10:35:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.554 10:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.554 10:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:00.554 10:35:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.554 10:35:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.554 10:35:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.554 10:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:00.554 10:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:00.554 10:35:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.554 10:35:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.554 [2024-11-20 10:35:03.802613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:00.554 10:35:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.554 10:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:00.554 10:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.554 10:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.554 10:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:00.554 10:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:00.554 10:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.554 10:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.554 10:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.554 10:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.554 10:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.554 10:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.554 10:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.554 10:35:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.554 10:35:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.554 10:35:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.554 10:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.554 "name": "Existed_Raid", 00:12:00.554 "uuid": "e66d2ace-db0d-45b8-bf48-223fbf8a2098", 00:12:00.554 "strip_size_kb": 64, 00:12:00.554 "state": "configuring", 00:12:00.554 "raid_level": "concat", 00:12:00.554 "superblock": true, 00:12:00.554 "num_base_bdevs": 4, 00:12:00.554 "num_base_bdevs_discovered": 3, 00:12:00.554 "num_base_bdevs_operational": 4, 00:12:00.554 "base_bdevs_list": [ 00:12:00.554 { 00:12:00.554 "name": null, 00:12:00.554 "uuid": "8c607d21-94ce-426d-b798-da5850308e6c", 00:12:00.554 "is_configured": false, 00:12:00.554 "data_offset": 0, 00:12:00.554 "data_size": 63488 00:12:00.554 }, 00:12:00.554 { 00:12:00.554 "name": "BaseBdev2", 00:12:00.554 "uuid": "631f0b46-7883-4a7b-922d-aadd3f446df2", 00:12:00.554 "is_configured": true, 00:12:00.554 "data_offset": 2048, 00:12:00.554 "data_size": 63488 00:12:00.554 }, 00:12:00.554 { 00:12:00.554 "name": "BaseBdev3", 00:12:00.554 "uuid": "759f8e57-cbbf-4a84-9e15-5a54559579dc", 00:12:00.554 "is_configured": true, 00:12:00.554 "data_offset": 2048, 00:12:00.554 "data_size": 63488 00:12:00.554 }, 00:12:00.554 { 00:12:00.554 "name": "BaseBdev4", 00:12:00.554 "uuid": "da62aea7-b90f-4c59-8695-7d3d5e203f9d", 00:12:00.554 "is_configured": true, 00:12:00.554 "data_offset": 2048, 00:12:00.554 "data_size": 63488 00:12:00.554 } 00:12:00.554 ] 00:12:00.554 }' 00:12:00.554 10:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.554 10:35:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.815 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:00.815 10:35:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.815 10:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.815 10:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.815 10:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.815 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:00.815 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.815 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:00.815 10:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.815 10:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.815 10:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.075 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8c607d21-94ce-426d-b798-da5850308e6c 00:12:01.075 10:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.075 10:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.075 [2024-11-20 10:35:04.346827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:01.075 [2024-11-20 10:35:04.347159] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:01.075 [2024-11-20 10:35:04.347175] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:01.075 [2024-11-20 10:35:04.347473] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:01.075 [2024-11-20 10:35:04.347621] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:01.075 [2024-11-20 10:35:04.347634] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:01.075 NewBaseBdev 00:12:01.075 [2024-11-20 10:35:04.347774] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:01.075 10:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.075 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:01.075 10:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:01.075 10:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:01.075 10:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:01.075 10:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:01.075 10:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:01.075 10:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:01.075 10:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.075 10:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.076 10:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.076 10:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:01.076 10:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.076 10:35:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.076 [ 00:12:01.076 { 00:12:01.076 "name": "NewBaseBdev", 00:12:01.076 "aliases": [ 00:12:01.076 "8c607d21-94ce-426d-b798-da5850308e6c" 00:12:01.076 ], 00:12:01.076 "product_name": "Malloc disk", 00:12:01.076 "block_size": 512, 00:12:01.076 "num_blocks": 65536, 00:12:01.076 "uuid": "8c607d21-94ce-426d-b798-da5850308e6c", 00:12:01.076 "assigned_rate_limits": { 00:12:01.076 "rw_ios_per_sec": 0, 00:12:01.076 "rw_mbytes_per_sec": 0, 00:12:01.076 "r_mbytes_per_sec": 0, 00:12:01.076 "w_mbytes_per_sec": 0 00:12:01.076 }, 00:12:01.076 "claimed": true, 00:12:01.076 "claim_type": "exclusive_write", 00:12:01.076 "zoned": false, 00:12:01.076 "supported_io_types": { 00:12:01.076 "read": true, 00:12:01.076 "write": true, 00:12:01.076 "unmap": true, 00:12:01.076 "flush": true, 00:12:01.076 "reset": true, 00:12:01.076 "nvme_admin": false, 00:12:01.076 "nvme_io": false, 00:12:01.076 "nvme_io_md": false, 00:12:01.076 "write_zeroes": true, 00:12:01.076 "zcopy": true, 00:12:01.076 "get_zone_info": false, 00:12:01.076 "zone_management": false, 00:12:01.076 "zone_append": false, 00:12:01.076 "compare": false, 00:12:01.076 "compare_and_write": false, 00:12:01.076 "abort": true, 00:12:01.076 "seek_hole": false, 00:12:01.076 "seek_data": false, 00:12:01.076 "copy": true, 00:12:01.076 "nvme_iov_md": false 00:12:01.076 }, 00:12:01.076 "memory_domains": [ 00:12:01.076 { 00:12:01.076 "dma_device_id": "system", 00:12:01.076 "dma_device_type": 1 00:12:01.076 }, 00:12:01.076 { 00:12:01.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.076 "dma_device_type": 2 00:12:01.076 } 00:12:01.076 ], 00:12:01.076 "driver_specific": {} 00:12:01.076 } 00:12:01.076 ] 00:12:01.076 10:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.076 10:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:01.076 10:35:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:01.076 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.076 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:01.076 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:01.076 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:01.076 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.076 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.076 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.076 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.076 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.076 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.076 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.076 10:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.076 10:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.076 10:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.076 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.076 "name": "Existed_Raid", 00:12:01.076 "uuid": "e66d2ace-db0d-45b8-bf48-223fbf8a2098", 00:12:01.076 "strip_size_kb": 64, 00:12:01.076 
"state": "online", 00:12:01.076 "raid_level": "concat", 00:12:01.076 "superblock": true, 00:12:01.076 "num_base_bdevs": 4, 00:12:01.076 "num_base_bdevs_discovered": 4, 00:12:01.076 "num_base_bdevs_operational": 4, 00:12:01.076 "base_bdevs_list": [ 00:12:01.076 { 00:12:01.076 "name": "NewBaseBdev", 00:12:01.076 "uuid": "8c607d21-94ce-426d-b798-da5850308e6c", 00:12:01.076 "is_configured": true, 00:12:01.076 "data_offset": 2048, 00:12:01.076 "data_size": 63488 00:12:01.076 }, 00:12:01.076 { 00:12:01.076 "name": "BaseBdev2", 00:12:01.076 "uuid": "631f0b46-7883-4a7b-922d-aadd3f446df2", 00:12:01.076 "is_configured": true, 00:12:01.076 "data_offset": 2048, 00:12:01.076 "data_size": 63488 00:12:01.076 }, 00:12:01.076 { 00:12:01.076 "name": "BaseBdev3", 00:12:01.076 "uuid": "759f8e57-cbbf-4a84-9e15-5a54559579dc", 00:12:01.076 "is_configured": true, 00:12:01.076 "data_offset": 2048, 00:12:01.076 "data_size": 63488 00:12:01.076 }, 00:12:01.076 { 00:12:01.076 "name": "BaseBdev4", 00:12:01.076 "uuid": "da62aea7-b90f-4c59-8695-7d3d5e203f9d", 00:12:01.076 "is_configured": true, 00:12:01.076 "data_offset": 2048, 00:12:01.076 "data_size": 63488 00:12:01.076 } 00:12:01.076 ] 00:12:01.076 }' 00:12:01.076 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.076 10:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.682 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:01.682 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:01.682 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:01.682 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:01.682 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:01.682 
10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:01.682 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:01.682 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:01.682 10:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.682 10:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.682 [2024-11-20 10:35:04.846461] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:01.682 10:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.682 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:01.682 "name": "Existed_Raid", 00:12:01.682 "aliases": [ 00:12:01.682 "e66d2ace-db0d-45b8-bf48-223fbf8a2098" 00:12:01.682 ], 00:12:01.682 "product_name": "Raid Volume", 00:12:01.682 "block_size": 512, 00:12:01.682 "num_blocks": 253952, 00:12:01.682 "uuid": "e66d2ace-db0d-45b8-bf48-223fbf8a2098", 00:12:01.682 "assigned_rate_limits": { 00:12:01.682 "rw_ios_per_sec": 0, 00:12:01.682 "rw_mbytes_per_sec": 0, 00:12:01.682 "r_mbytes_per_sec": 0, 00:12:01.682 "w_mbytes_per_sec": 0 00:12:01.682 }, 00:12:01.682 "claimed": false, 00:12:01.682 "zoned": false, 00:12:01.682 "supported_io_types": { 00:12:01.682 "read": true, 00:12:01.682 "write": true, 00:12:01.682 "unmap": true, 00:12:01.682 "flush": true, 00:12:01.682 "reset": true, 00:12:01.682 "nvme_admin": false, 00:12:01.682 "nvme_io": false, 00:12:01.682 "nvme_io_md": false, 00:12:01.682 "write_zeroes": true, 00:12:01.682 "zcopy": false, 00:12:01.682 "get_zone_info": false, 00:12:01.682 "zone_management": false, 00:12:01.682 "zone_append": false, 00:12:01.682 "compare": false, 00:12:01.682 "compare_and_write": false, 00:12:01.682 "abort": 
false, 00:12:01.682 "seek_hole": false, 00:12:01.682 "seek_data": false, 00:12:01.682 "copy": false, 00:12:01.682 "nvme_iov_md": false 00:12:01.682 }, 00:12:01.682 "memory_domains": [ 00:12:01.682 { 00:12:01.682 "dma_device_id": "system", 00:12:01.682 "dma_device_type": 1 00:12:01.682 }, 00:12:01.682 { 00:12:01.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.682 "dma_device_type": 2 00:12:01.682 }, 00:12:01.682 { 00:12:01.682 "dma_device_id": "system", 00:12:01.682 "dma_device_type": 1 00:12:01.682 }, 00:12:01.682 { 00:12:01.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.682 "dma_device_type": 2 00:12:01.682 }, 00:12:01.682 { 00:12:01.682 "dma_device_id": "system", 00:12:01.682 "dma_device_type": 1 00:12:01.682 }, 00:12:01.682 { 00:12:01.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.682 "dma_device_type": 2 00:12:01.682 }, 00:12:01.682 { 00:12:01.682 "dma_device_id": "system", 00:12:01.682 "dma_device_type": 1 00:12:01.682 }, 00:12:01.682 { 00:12:01.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.682 "dma_device_type": 2 00:12:01.682 } 00:12:01.682 ], 00:12:01.682 "driver_specific": { 00:12:01.682 "raid": { 00:12:01.682 "uuid": "e66d2ace-db0d-45b8-bf48-223fbf8a2098", 00:12:01.682 "strip_size_kb": 64, 00:12:01.682 "state": "online", 00:12:01.682 "raid_level": "concat", 00:12:01.682 "superblock": true, 00:12:01.682 "num_base_bdevs": 4, 00:12:01.682 "num_base_bdevs_discovered": 4, 00:12:01.682 "num_base_bdevs_operational": 4, 00:12:01.682 "base_bdevs_list": [ 00:12:01.682 { 00:12:01.682 "name": "NewBaseBdev", 00:12:01.682 "uuid": "8c607d21-94ce-426d-b798-da5850308e6c", 00:12:01.682 "is_configured": true, 00:12:01.682 "data_offset": 2048, 00:12:01.682 "data_size": 63488 00:12:01.682 }, 00:12:01.682 { 00:12:01.682 "name": "BaseBdev2", 00:12:01.682 "uuid": "631f0b46-7883-4a7b-922d-aadd3f446df2", 00:12:01.682 "is_configured": true, 00:12:01.682 "data_offset": 2048, 00:12:01.682 "data_size": 63488 00:12:01.682 }, 00:12:01.683 { 00:12:01.683 
"name": "BaseBdev3", 00:12:01.683 "uuid": "759f8e57-cbbf-4a84-9e15-5a54559579dc", 00:12:01.683 "is_configured": true, 00:12:01.683 "data_offset": 2048, 00:12:01.683 "data_size": 63488 00:12:01.683 }, 00:12:01.683 { 00:12:01.683 "name": "BaseBdev4", 00:12:01.683 "uuid": "da62aea7-b90f-4c59-8695-7d3d5e203f9d", 00:12:01.683 "is_configured": true, 00:12:01.683 "data_offset": 2048, 00:12:01.683 "data_size": 63488 00:12:01.683 } 00:12:01.683 ] 00:12:01.683 } 00:12:01.683 } 00:12:01.683 }' 00:12:01.683 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:01.683 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:01.683 BaseBdev2 00:12:01.683 BaseBdev3 00:12:01.683 BaseBdev4' 00:12:01.683 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:01.683 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:01.683 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:01.683 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:01.683 10:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.683 10:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.683 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:01.683 10:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.683 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:01.683 10:35:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:01.683 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:01.683 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:01.683 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:01.683 10:35:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.683 10:35:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.683 10:35:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.683 10:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:01.683 10:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:01.683 10:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:01.683 10:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:01.683 10:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:01.683 10:35:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.683 10:35:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.683 10:35:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.683 10:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:01.683 10:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:12:01.683 10:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:01.683 10:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:01.683 10:35:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.683 10:35:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.683 10:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:01.683 10:35:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.683 10:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:01.683 10:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:01.683 10:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:01.683 10:35:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.683 10:35:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.683 [2024-11-20 10:35:05.145495] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:01.683 [2024-11-20 10:35:05.145525] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:01.683 [2024-11-20 10:35:05.145602] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:01.683 [2024-11-20 10:35:05.145672] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:01.683 [2024-11-20 10:35:05.145682] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:12:01.683 10:35:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.683 10:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72142 00:12:01.683 10:35:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72142 ']' 00:12:01.683 10:35:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72142 00:12:01.683 10:35:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:01.683 10:35:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:01.943 10:35:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72142 00:12:01.943 killing process with pid 72142 00:12:01.943 10:35:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:01.943 10:35:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:01.943 10:35:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72142' 00:12:01.943 10:35:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72142 00:12:01.943 [2024-11-20 10:35:05.183318] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:01.943 10:35:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72142 00:12:02.203 [2024-11-20 10:35:05.581963] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:03.582 10:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:03.582 00:12:03.582 real 0m11.884s 00:12:03.582 user 0m18.968s 00:12:03.582 sys 0m2.055s 00:12:03.582 10:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:03.582 10:35:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.582 ************************************ 00:12:03.582 END TEST raid_state_function_test_sb 00:12:03.582 ************************************ 00:12:03.582 10:35:06 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:12:03.582 10:35:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:03.582 10:35:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.582 10:35:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:03.582 ************************************ 00:12:03.582 START TEST raid_superblock_test 00:12:03.582 ************************************ 00:12:03.582 10:35:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:12:03.582 10:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:12:03.582 10:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:03.582 10:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:03.582 10:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:03.582 10:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:03.582 10:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:03.582 10:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:03.582 10:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:03.582 10:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:03.582 10:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:03.582 10:35:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:03.582 10:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:03.582 10:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:03.582 10:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:12:03.582 10:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:03.582 10:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:03.582 10:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72812 00:12:03.582 10:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:03.582 10:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72812 00:12:03.582 10:35:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72812 ']' 00:12:03.582 10:35:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.582 10:35:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:03.582 10:35:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.582 10:35:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:03.582 10:35:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.582 [2024-11-20 10:35:06.888869] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:12:03.582 [2024-11-20 10:35:06.889084] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72812 ] 00:12:03.842 [2024-11-20 10:35:07.064107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.842 [2024-11-20 10:35:07.181533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.102 [2024-11-20 10:35:07.391613] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:04.102 [2024-11-20 10:35:07.391750] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:04.362 10:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:04.362 10:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:04.362 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:04.362 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:04.362 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:04.362 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:04.362 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:04.362 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:04.362 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:04.362 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:04.362 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:04.362 
10:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.362 10:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.362 malloc1 00:12:04.362 10:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.362 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:04.362 10:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.362 10:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.362 [2024-11-20 10:35:07.788448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:04.362 [2024-11-20 10:35:07.788560] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.362 [2024-11-20 10:35:07.788614] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:04.362 [2024-11-20 10:35:07.788645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.362 [2024-11-20 10:35:07.790830] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.362 [2024-11-20 10:35:07.790902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:04.362 pt1 00:12:04.362 10:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.362 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:04.362 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:04.362 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:04.362 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:04.362 10:35:07 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:04.362 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:04.362 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:04.362 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:04.362 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:04.362 10:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.362 10:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.623 malloc2 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.623 [2024-11-20 10:35:07.848689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:04.623 [2024-11-20 10:35:07.848750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.623 [2024-11-20 10:35:07.848773] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:04.623 [2024-11-20 10:35:07.848782] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.623 [2024-11-20 10:35:07.851012] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.623 [2024-11-20 10:35:07.851048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:04.623 
pt2 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.623 malloc3 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.623 [2024-11-20 10:35:07.916560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:04.623 [2024-11-20 10:35:07.916687] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.623 [2024-11-20 10:35:07.916733] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:04.623 [2024-11-20 10:35:07.916786] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.623 [2024-11-20 10:35:07.918973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.623 [2024-11-20 10:35:07.919048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:04.623 pt3 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.623 malloc4 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.623 [2024-11-20 10:35:07.979393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:04.623 [2024-11-20 10:35:07.979497] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.623 [2024-11-20 10:35:07.979538] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:04.623 [2024-11-20 10:35:07.979582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.623 [2024-11-20 10:35:07.981934] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.623 [2024-11-20 10:35:07.982022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:04.623 pt4 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.623 [2024-11-20 10:35:07.991383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:04.623 [2024-11-20 
10:35:07.993330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:04.623 [2024-11-20 10:35:07.993425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:04.623 [2024-11-20 10:35:07.993498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:04.623 [2024-11-20 10:35:07.993706] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:04.623 [2024-11-20 10:35:07.993726] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:04.623 [2024-11-20 10:35:07.994008] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:04.623 [2024-11-20 10:35:07.994183] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:04.623 [2024-11-20 10:35:07.994196] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:04.623 [2024-11-20 10:35:07.994351] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.623 10:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.623 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.623 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.623 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.623 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.623 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.623 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.623 "name": "raid_bdev1", 00:12:04.623 "uuid": "95c29b4c-09a4-45da-8d66-7be81f474925", 00:12:04.623 "strip_size_kb": 64, 00:12:04.623 "state": "online", 00:12:04.623 "raid_level": "concat", 00:12:04.623 "superblock": true, 00:12:04.623 "num_base_bdevs": 4, 00:12:04.623 "num_base_bdevs_discovered": 4, 00:12:04.623 "num_base_bdevs_operational": 4, 00:12:04.623 "base_bdevs_list": [ 00:12:04.623 { 00:12:04.623 "name": "pt1", 00:12:04.623 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:04.623 "is_configured": true, 00:12:04.623 "data_offset": 2048, 00:12:04.623 "data_size": 63488 00:12:04.623 }, 00:12:04.623 { 00:12:04.624 "name": "pt2", 00:12:04.624 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:04.624 "is_configured": true, 00:12:04.624 "data_offset": 2048, 00:12:04.624 "data_size": 63488 00:12:04.624 }, 00:12:04.624 { 00:12:04.624 "name": "pt3", 00:12:04.624 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:04.624 "is_configured": true, 00:12:04.624 "data_offset": 2048, 00:12:04.624 
"data_size": 63488 00:12:04.624 }, 00:12:04.624 { 00:12:04.624 "name": "pt4", 00:12:04.624 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:04.624 "is_configured": true, 00:12:04.624 "data_offset": 2048, 00:12:04.624 "data_size": 63488 00:12:04.624 } 00:12:04.624 ] 00:12:04.624 }' 00:12:04.624 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.624 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.194 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:05.194 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:05.194 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:05.194 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:05.194 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:05.194 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:05.194 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:05.194 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:05.194 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.194 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.194 [2024-11-20 10:35:08.434885] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:05.194 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.194 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:05.194 "name": "raid_bdev1", 00:12:05.194 "aliases": [ 00:12:05.194 "95c29b4c-09a4-45da-8d66-7be81f474925" 
00:12:05.194 ], 00:12:05.194 "product_name": "Raid Volume", 00:12:05.194 "block_size": 512, 00:12:05.194 "num_blocks": 253952, 00:12:05.194 "uuid": "95c29b4c-09a4-45da-8d66-7be81f474925", 00:12:05.194 "assigned_rate_limits": { 00:12:05.194 "rw_ios_per_sec": 0, 00:12:05.194 "rw_mbytes_per_sec": 0, 00:12:05.194 "r_mbytes_per_sec": 0, 00:12:05.194 "w_mbytes_per_sec": 0 00:12:05.194 }, 00:12:05.194 "claimed": false, 00:12:05.194 "zoned": false, 00:12:05.194 "supported_io_types": { 00:12:05.194 "read": true, 00:12:05.194 "write": true, 00:12:05.194 "unmap": true, 00:12:05.194 "flush": true, 00:12:05.194 "reset": true, 00:12:05.194 "nvme_admin": false, 00:12:05.194 "nvme_io": false, 00:12:05.194 "nvme_io_md": false, 00:12:05.194 "write_zeroes": true, 00:12:05.194 "zcopy": false, 00:12:05.194 "get_zone_info": false, 00:12:05.194 "zone_management": false, 00:12:05.194 "zone_append": false, 00:12:05.194 "compare": false, 00:12:05.195 "compare_and_write": false, 00:12:05.195 "abort": false, 00:12:05.195 "seek_hole": false, 00:12:05.195 "seek_data": false, 00:12:05.195 "copy": false, 00:12:05.195 "nvme_iov_md": false 00:12:05.195 }, 00:12:05.195 "memory_domains": [ 00:12:05.195 { 00:12:05.195 "dma_device_id": "system", 00:12:05.195 "dma_device_type": 1 00:12:05.195 }, 00:12:05.195 { 00:12:05.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.195 "dma_device_type": 2 00:12:05.195 }, 00:12:05.195 { 00:12:05.195 "dma_device_id": "system", 00:12:05.195 "dma_device_type": 1 00:12:05.195 }, 00:12:05.195 { 00:12:05.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.195 "dma_device_type": 2 00:12:05.195 }, 00:12:05.195 { 00:12:05.195 "dma_device_id": "system", 00:12:05.195 "dma_device_type": 1 00:12:05.195 }, 00:12:05.195 { 00:12:05.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.195 "dma_device_type": 2 00:12:05.195 }, 00:12:05.195 { 00:12:05.195 "dma_device_id": "system", 00:12:05.195 "dma_device_type": 1 00:12:05.195 }, 00:12:05.195 { 00:12:05.195 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:05.195 "dma_device_type": 2 00:12:05.195 } 00:12:05.195 ], 00:12:05.195 "driver_specific": { 00:12:05.195 "raid": { 00:12:05.195 "uuid": "95c29b4c-09a4-45da-8d66-7be81f474925", 00:12:05.195 "strip_size_kb": 64, 00:12:05.195 "state": "online", 00:12:05.195 "raid_level": "concat", 00:12:05.195 "superblock": true, 00:12:05.195 "num_base_bdevs": 4, 00:12:05.195 "num_base_bdevs_discovered": 4, 00:12:05.195 "num_base_bdevs_operational": 4, 00:12:05.195 "base_bdevs_list": [ 00:12:05.195 { 00:12:05.195 "name": "pt1", 00:12:05.195 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:05.195 "is_configured": true, 00:12:05.195 "data_offset": 2048, 00:12:05.195 "data_size": 63488 00:12:05.195 }, 00:12:05.195 { 00:12:05.195 "name": "pt2", 00:12:05.195 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:05.195 "is_configured": true, 00:12:05.195 "data_offset": 2048, 00:12:05.195 "data_size": 63488 00:12:05.195 }, 00:12:05.195 { 00:12:05.195 "name": "pt3", 00:12:05.195 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:05.195 "is_configured": true, 00:12:05.195 "data_offset": 2048, 00:12:05.195 "data_size": 63488 00:12:05.195 }, 00:12:05.195 { 00:12:05.195 "name": "pt4", 00:12:05.195 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:05.195 "is_configured": true, 00:12:05.195 "data_offset": 2048, 00:12:05.195 "data_size": 63488 00:12:05.195 } 00:12:05.195 ] 00:12:05.195 } 00:12:05.195 } 00:12:05.195 }' 00:12:05.195 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:05.195 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:05.195 pt2 00:12:05.195 pt3 00:12:05.195 pt4' 00:12:05.195 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.195 10:35:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:05.195 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.195 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.195 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:05.195 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.195 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.195 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.195 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.195 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.195 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.195 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:05.195 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.195 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.195 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.195 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.195 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.195 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.195 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.195 10:35:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:05.195 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.195 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.195 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.195 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:05.456 [2024-11-20 10:35:08.750350] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=95c29b4c-09a4-45da-8d66-7be81f474925 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 95c29b4c-09a4-45da-8d66-7be81f474925 ']' 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.456 [2024-11-20 10:35:08.793964] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:05.456 [2024-11-20 10:35:08.794042] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:05.456 [2024-11-20 10:35:08.794153] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:05.456 [2024-11-20 10:35:08.794240] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:05.456 [2024-11-20 10:35:08.794292] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:05.456 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:05.717 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:05.717 10:35:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:05.717 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:05.717 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:05.717 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.717 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.717 [2024-11-20 10:35:08.941748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:05.717 [2024-11-20 10:35:08.943745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:05.717 [2024-11-20 10:35:08.943801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:05.717 [2024-11-20 10:35:08.943839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:05.717 [2024-11-20 10:35:08.943897] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:05.717 [2024-11-20 10:35:08.943954] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:05.717 [2024-11-20 10:35:08.943976] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:05.717 [2024-11-20 10:35:08.943997] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:05.717 [2024-11-20 10:35:08.944012] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:05.717 [2024-11-20 10:35:08.944024] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:12:05.717 request: 00:12:05.717 { 00:12:05.717 "name": "raid_bdev1", 00:12:05.717 "raid_level": "concat", 00:12:05.717 "base_bdevs": [ 00:12:05.717 "malloc1", 00:12:05.717 "malloc2", 00:12:05.717 "malloc3", 00:12:05.717 "malloc4" 00:12:05.717 ], 00:12:05.717 "strip_size_kb": 64, 00:12:05.717 "superblock": false, 00:12:05.717 "method": "bdev_raid_create", 00:12:05.717 "req_id": 1 00:12:05.717 } 00:12:05.717 Got JSON-RPC error response 00:12:05.717 response: 00:12:05.717 { 00:12:05.717 "code": -17, 00:12:05.717 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:05.717 } 00:12:05.717 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:05.717 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:05.717 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:05.717 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:05.717 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:05.717 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.717 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:05.717 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.717 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.717 10:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.717 10:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:05.717 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:05.717 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:12:05.717 10:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.717 10:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.717 [2024-11-20 10:35:09.009621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:05.717 [2024-11-20 10:35:09.009739] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.717 [2024-11-20 10:35:09.009786] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:05.717 [2024-11-20 10:35:09.009826] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.717 [2024-11-20 10:35:09.012225] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.717 [2024-11-20 10:35:09.012317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:05.717 [2024-11-20 10:35:09.012447] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:05.717 [2024-11-20 10:35:09.012559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:05.717 pt1 00:12:05.717 10:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.717 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:12:05.717 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.717 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.717 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:05.717 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:05.717 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:12:05.717 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.717 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.717 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.717 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.717 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.717 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.717 10:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.717 10:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.717 10:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.717 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.717 "name": "raid_bdev1", 00:12:05.717 "uuid": "95c29b4c-09a4-45da-8d66-7be81f474925", 00:12:05.717 "strip_size_kb": 64, 00:12:05.717 "state": "configuring", 00:12:05.717 "raid_level": "concat", 00:12:05.717 "superblock": true, 00:12:05.717 "num_base_bdevs": 4, 00:12:05.717 "num_base_bdevs_discovered": 1, 00:12:05.717 "num_base_bdevs_operational": 4, 00:12:05.717 "base_bdevs_list": [ 00:12:05.717 { 00:12:05.717 "name": "pt1", 00:12:05.717 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:05.717 "is_configured": true, 00:12:05.717 "data_offset": 2048, 00:12:05.717 "data_size": 63488 00:12:05.717 }, 00:12:05.717 { 00:12:05.717 "name": null, 00:12:05.717 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:05.717 "is_configured": false, 00:12:05.717 "data_offset": 2048, 00:12:05.717 "data_size": 63488 00:12:05.717 }, 00:12:05.717 { 00:12:05.717 "name": null, 00:12:05.717 
"uuid": "00000000-0000-0000-0000-000000000003", 00:12:05.717 "is_configured": false, 00:12:05.717 "data_offset": 2048, 00:12:05.717 "data_size": 63488 00:12:05.717 }, 00:12:05.717 { 00:12:05.717 "name": null, 00:12:05.717 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:05.717 "is_configured": false, 00:12:05.717 "data_offset": 2048, 00:12:05.717 "data_size": 63488 00:12:05.717 } 00:12:05.717 ] 00:12:05.717 }' 00:12:05.717 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.717 10:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.977 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:05.977 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:05.977 10:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.977 10:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.977 [2024-11-20 10:35:09.432929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:05.977 [2024-11-20 10:35:09.433005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.977 [2024-11-20 10:35:09.433024] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:05.977 [2024-11-20 10:35:09.433035] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.977 [2024-11-20 10:35:09.433505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.977 [2024-11-20 10:35:09.433527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:05.977 [2024-11-20 10:35:09.433607] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:05.977 [2024-11-20 10:35:09.433634] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:05.977 pt2 00:12:05.977 10:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.977 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:05.977 10:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.977 10:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.977 [2024-11-20 10:35:09.444924] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:05.977 10:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.977 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:12:05.977 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.977 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.977 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:05.977 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:05.977 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.977 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.977 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.977 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.977 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.238 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.238 10:35:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.238 10:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.238 10:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.238 10:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.238 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.238 "name": "raid_bdev1", 00:12:06.238 "uuid": "95c29b4c-09a4-45da-8d66-7be81f474925", 00:12:06.238 "strip_size_kb": 64, 00:12:06.238 "state": "configuring", 00:12:06.238 "raid_level": "concat", 00:12:06.238 "superblock": true, 00:12:06.238 "num_base_bdevs": 4, 00:12:06.238 "num_base_bdevs_discovered": 1, 00:12:06.238 "num_base_bdevs_operational": 4, 00:12:06.238 "base_bdevs_list": [ 00:12:06.238 { 00:12:06.238 "name": "pt1", 00:12:06.238 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:06.238 "is_configured": true, 00:12:06.238 "data_offset": 2048, 00:12:06.238 "data_size": 63488 00:12:06.238 }, 00:12:06.238 { 00:12:06.238 "name": null, 00:12:06.238 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:06.238 "is_configured": false, 00:12:06.238 "data_offset": 0, 00:12:06.238 "data_size": 63488 00:12:06.238 }, 00:12:06.238 { 00:12:06.238 "name": null, 00:12:06.238 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:06.238 "is_configured": false, 00:12:06.238 "data_offset": 2048, 00:12:06.238 "data_size": 63488 00:12:06.238 }, 00:12:06.238 { 00:12:06.238 "name": null, 00:12:06.238 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:06.238 "is_configured": false, 00:12:06.238 "data_offset": 2048, 00:12:06.238 "data_size": 63488 00:12:06.238 } 00:12:06.238 ] 00:12:06.238 }' 00:12:06.238 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.238 10:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:12:06.499 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:06.499 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:06.499 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:06.499 10:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.499 10:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.499 [2024-11-20 10:35:09.872187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:06.499 [2024-11-20 10:35:09.872315] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.499 [2024-11-20 10:35:09.872387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:06.499 [2024-11-20 10:35:09.872429] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.499 [2024-11-20 10:35:09.872910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.499 [2024-11-20 10:35:09.872969] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:06.499 [2024-11-20 10:35:09.873084] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:06.499 [2024-11-20 10:35:09.873135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:06.499 pt2 00:12:06.499 10:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.499 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:06.499 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:06.499 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:12:06.499 10:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.499 10:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.499 [2024-11-20 10:35:09.884130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:06.499 [2024-11-20 10:35:09.884221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.499 [2024-11-20 10:35:09.884262] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:06.499 [2024-11-20 10:35:09.884292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.499 [2024-11-20 10:35:09.884685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.499 [2024-11-20 10:35:09.884740] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:06.499 [2024-11-20 10:35:09.884840] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:06.499 [2024-11-20 10:35:09.884888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:06.499 pt3 00:12:06.499 10:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.499 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:06.499 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:06.499 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:06.499 10:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.499 10:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.499 [2024-11-20 10:35:09.896082] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:12:06.499 [2024-11-20 10:35:09.896132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.499 [2024-11-20 10:35:09.896168] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:06.499 [2024-11-20 10:35:09.896176] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.499 [2024-11-20 10:35:09.896569] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.499 [2024-11-20 10:35:09.896586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:06.499 [2024-11-20 10:35:09.896650] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:06.499 [2024-11-20 10:35:09.896668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:06.499 [2024-11-20 10:35:09.896806] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:06.499 [2024-11-20 10:35:09.896820] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:06.499 [2024-11-20 10:35:09.897045] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:06.499 [2024-11-20 10:35:09.897183] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:06.500 [2024-11-20 10:35:09.897202] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:06.500 [2024-11-20 10:35:09.897330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:06.500 pt4 00:12:06.500 10:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.500 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:06.500 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:06.500 
10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:06.500 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:06.500 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:06.500 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:06.500 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:06.500 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:06.500 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.500 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.500 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.500 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.500 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.500 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.500 10:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.500 10:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.500 10:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.500 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.500 "name": "raid_bdev1", 00:12:06.500 "uuid": "95c29b4c-09a4-45da-8d66-7be81f474925", 00:12:06.500 "strip_size_kb": 64, 00:12:06.500 "state": "online", 00:12:06.500 "raid_level": "concat", 00:12:06.500 "superblock": true, 00:12:06.500 
"num_base_bdevs": 4, 00:12:06.500 "num_base_bdevs_discovered": 4, 00:12:06.500 "num_base_bdevs_operational": 4, 00:12:06.500 "base_bdevs_list": [ 00:12:06.500 { 00:12:06.500 "name": "pt1", 00:12:06.500 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:06.500 "is_configured": true, 00:12:06.500 "data_offset": 2048, 00:12:06.500 "data_size": 63488 00:12:06.500 }, 00:12:06.500 { 00:12:06.500 "name": "pt2", 00:12:06.500 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:06.500 "is_configured": true, 00:12:06.500 "data_offset": 2048, 00:12:06.500 "data_size": 63488 00:12:06.500 }, 00:12:06.500 { 00:12:06.500 "name": "pt3", 00:12:06.500 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:06.500 "is_configured": true, 00:12:06.500 "data_offset": 2048, 00:12:06.500 "data_size": 63488 00:12:06.500 }, 00:12:06.500 { 00:12:06.500 "name": "pt4", 00:12:06.500 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:06.500 "is_configured": true, 00:12:06.500 "data_offset": 2048, 00:12:06.500 "data_size": 63488 00:12:06.500 } 00:12:06.500 ] 00:12:06.500 }' 00:12:06.500 10:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.500 10:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.070 10:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:07.070 10:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:07.070 10:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:07.070 10:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:07.070 10:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:07.070 10:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:07.070 10:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:07.070 10:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:07.070 10:35:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.070 10:35:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.070 [2024-11-20 10:35:10.379644] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:07.070 10:35:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.070 10:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:07.070 "name": "raid_bdev1", 00:12:07.070 "aliases": [ 00:12:07.070 "95c29b4c-09a4-45da-8d66-7be81f474925" 00:12:07.070 ], 00:12:07.070 "product_name": "Raid Volume", 00:12:07.070 "block_size": 512, 00:12:07.070 "num_blocks": 253952, 00:12:07.070 "uuid": "95c29b4c-09a4-45da-8d66-7be81f474925", 00:12:07.070 "assigned_rate_limits": { 00:12:07.070 "rw_ios_per_sec": 0, 00:12:07.070 "rw_mbytes_per_sec": 0, 00:12:07.070 "r_mbytes_per_sec": 0, 00:12:07.070 "w_mbytes_per_sec": 0 00:12:07.070 }, 00:12:07.070 "claimed": false, 00:12:07.070 "zoned": false, 00:12:07.070 "supported_io_types": { 00:12:07.070 "read": true, 00:12:07.070 "write": true, 00:12:07.070 "unmap": true, 00:12:07.070 "flush": true, 00:12:07.070 "reset": true, 00:12:07.070 "nvme_admin": false, 00:12:07.070 "nvme_io": false, 00:12:07.070 "nvme_io_md": false, 00:12:07.070 "write_zeroes": true, 00:12:07.070 "zcopy": false, 00:12:07.070 "get_zone_info": false, 00:12:07.070 "zone_management": false, 00:12:07.070 "zone_append": false, 00:12:07.070 "compare": false, 00:12:07.070 "compare_and_write": false, 00:12:07.070 "abort": false, 00:12:07.070 "seek_hole": false, 00:12:07.070 "seek_data": false, 00:12:07.070 "copy": false, 00:12:07.070 "nvme_iov_md": false 00:12:07.070 }, 00:12:07.070 "memory_domains": [ 00:12:07.070 { 00:12:07.070 "dma_device_id": "system", 
00:12:07.070 "dma_device_type": 1 00:12:07.070 }, 00:12:07.070 { 00:12:07.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.070 "dma_device_type": 2 00:12:07.070 }, 00:12:07.070 { 00:12:07.070 "dma_device_id": "system", 00:12:07.070 "dma_device_type": 1 00:12:07.070 }, 00:12:07.070 { 00:12:07.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.070 "dma_device_type": 2 00:12:07.070 }, 00:12:07.070 { 00:12:07.070 "dma_device_id": "system", 00:12:07.070 "dma_device_type": 1 00:12:07.070 }, 00:12:07.070 { 00:12:07.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.070 "dma_device_type": 2 00:12:07.070 }, 00:12:07.070 { 00:12:07.070 "dma_device_id": "system", 00:12:07.070 "dma_device_type": 1 00:12:07.070 }, 00:12:07.070 { 00:12:07.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.070 "dma_device_type": 2 00:12:07.070 } 00:12:07.070 ], 00:12:07.070 "driver_specific": { 00:12:07.070 "raid": { 00:12:07.070 "uuid": "95c29b4c-09a4-45da-8d66-7be81f474925", 00:12:07.070 "strip_size_kb": 64, 00:12:07.070 "state": "online", 00:12:07.070 "raid_level": "concat", 00:12:07.070 "superblock": true, 00:12:07.070 "num_base_bdevs": 4, 00:12:07.070 "num_base_bdevs_discovered": 4, 00:12:07.070 "num_base_bdevs_operational": 4, 00:12:07.070 "base_bdevs_list": [ 00:12:07.070 { 00:12:07.070 "name": "pt1", 00:12:07.070 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:07.070 "is_configured": true, 00:12:07.070 "data_offset": 2048, 00:12:07.070 "data_size": 63488 00:12:07.070 }, 00:12:07.070 { 00:12:07.070 "name": "pt2", 00:12:07.070 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:07.070 "is_configured": true, 00:12:07.070 "data_offset": 2048, 00:12:07.070 "data_size": 63488 00:12:07.070 }, 00:12:07.070 { 00:12:07.070 "name": "pt3", 00:12:07.070 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:07.070 "is_configured": true, 00:12:07.070 "data_offset": 2048, 00:12:07.070 "data_size": 63488 00:12:07.070 }, 00:12:07.070 { 00:12:07.070 "name": "pt4", 00:12:07.070 
"uuid": "00000000-0000-0000-0000-000000000004", 00:12:07.070 "is_configured": true, 00:12:07.070 "data_offset": 2048, 00:12:07.070 "data_size": 63488 00:12:07.070 } 00:12:07.070 ] 00:12:07.070 } 00:12:07.070 } 00:12:07.070 }' 00:12:07.070 10:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:07.070 10:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:07.070 pt2 00:12:07.070 pt3 00:12:07.070 pt4' 00:12:07.070 10:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:07.070 10:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:07.070 10:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:07.070 10:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:07.070 10:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:07.070 10:35:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.070 10:35:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.070 10:35:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.330 10:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:07.330 10:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:07.330 10:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:07.330 10:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:07.330 10:35:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:07.330 10:35:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.330 10:35:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.330 10:35:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.330 10:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:07.330 10:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:07.330 10:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:07.330 10:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:07.330 10:35:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.330 10:35:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.330 10:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:07.330 10:35:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.330 10:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:07.330 10:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:07.330 10:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:07.330 10:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:07.330 10:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:07.330 10:35:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:07.330 10:35:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.330 10:35:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.330 10:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:07.330 10:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:07.330 10:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:07.330 10:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:07.330 10:35:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.330 10:35:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.330 [2024-11-20 10:35:10.723029] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:07.330 10:35:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.330 10:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 95c29b4c-09a4-45da-8d66-7be81f474925 '!=' 95c29b4c-09a4-45da-8d66-7be81f474925 ']' 00:12:07.330 10:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:12:07.330 10:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:07.330 10:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:07.330 10:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72812 00:12:07.330 10:35:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72812 ']' 00:12:07.330 10:35:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72812 00:12:07.331 10:35:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:07.331 10:35:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:07.331 10:35:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72812 00:12:07.331 killing process with pid 72812 00:12:07.331 10:35:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:07.331 10:35:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:07.331 10:35:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72812' 00:12:07.331 10:35:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72812 00:12:07.331 [2024-11-20 10:35:10.804250] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:07.331 [2024-11-20 10:35:10.804340] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:07.331 10:35:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72812 00:12:07.331 [2024-11-20 10:35:10.804449] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:07.331 [2024-11-20 10:35:10.804461] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:07.898 [2024-11-20 10:35:11.217854] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:09.275 10:35:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:09.275 00:12:09.275 real 0m5.531s 00:12:09.275 user 0m7.884s 00:12:09.275 sys 0m0.960s 00:12:09.275 10:35:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.275 10:35:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.275 ************************************ 00:12:09.275 END TEST raid_superblock_test 00:12:09.275 ************************************ 00:12:09.275 
10:35:12 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:12:09.275 10:35:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:09.275 10:35:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.275 10:35:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:09.275 ************************************ 00:12:09.275 START TEST raid_read_error_test 00:12:09.275 ************************************ 00:12:09.275 10:35:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:12:09.275 10:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:09.275 10:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:09.275 10:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:09.275 10:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:09.275 10:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:09.275 10:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:09.275 10:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:09.275 10:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:09.275 10:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:09.275 10:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:09.275 10:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:09.275 10:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:09.275 10:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:09.275 10:35:12 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:09.275 10:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:09.275 10:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:09.275 10:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:09.275 10:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:09.275 10:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:09.275 10:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:09.275 10:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:09.275 10:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:09.275 10:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:09.275 10:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:09.275 10:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:09.275 10:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:09.275 10:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:09.275 10:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:09.275 10:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LKatejA9zW 00:12:09.275 10:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73078 00:12:09.275 10:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73078 00:12:09.275 10:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T 
raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:09.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.275 10:35:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73078 ']' 00:12:09.275 10:35:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.275 10:35:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:09.275 10:35:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.275 10:35:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:09.275 10:35:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.275 [2024-11-20 10:35:12.496027] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:12:09.275 [2024-11-20 10:35:12.496146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73078 ] 00:12:09.275 [2024-11-20 10:35:12.669419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.536 [2024-11-20 10:35:12.784778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.536 [2024-11-20 10:35:12.988027] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:09.536 [2024-11-20 10:35:12.988060] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:10.106 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:10.106 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:10.106 10:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:10.106 10:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:10.106 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.106 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.106 BaseBdev1_malloc 00:12:10.106 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.106 10:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:10.106 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.106 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.106 true 00:12:10.106 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:10.106 10:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:10.106 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.106 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.106 [2024-11-20 10:35:13.406864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:10.106 [2024-11-20 10:35:13.406985] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.106 [2024-11-20 10:35:13.407014] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:10.106 [2024-11-20 10:35:13.407028] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.106 [2024-11-20 10:35:13.409387] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.106 [2024-11-20 10:35:13.409424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:10.106 BaseBdev1 00:12:10.106 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.106 10:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:10.106 10:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:10.106 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.106 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.106 BaseBdev2_malloc 00:12:10.106 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.106 10:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:10.106 10:35:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.106 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.106 true 00:12:10.106 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.106 10:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:10.106 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.106 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.106 [2024-11-20 10:35:13.475633] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:10.106 [2024-11-20 10:35:13.475755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.106 [2024-11-20 10:35:13.475780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:10.106 [2024-11-20 10:35:13.475792] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.106 [2024-11-20 10:35:13.478123] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.106 [2024-11-20 10:35:13.478161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:10.106 BaseBdev2 00:12:10.106 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.106 10:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:10.106 10:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:10.106 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.106 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.106 BaseBdev3_malloc 00:12:10.106 10:35:13 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.106 10:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:10.106 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.106 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.106 true 00:12:10.106 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.106 10:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:10.106 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.106 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.106 [2024-11-20 10:35:13.554381] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:10.106 [2024-11-20 10:35:13.554433] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.107 [2024-11-20 10:35:13.554450] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:10.107 [2024-11-20 10:35:13.554460] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.107 [2024-11-20 10:35:13.556590] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.107 [2024-11-20 10:35:13.556668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:10.107 BaseBdev3 00:12:10.107 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.107 10:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:10.107 10:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:10.107 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.107 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.367 BaseBdev4_malloc 00:12:10.367 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.367 10:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:10.367 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.367 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.367 true 00:12:10.367 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.367 10:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:10.367 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.367 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.367 [2024-11-20 10:35:13.623935] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:10.367 [2024-11-20 10:35:13.623991] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.367 [2024-11-20 10:35:13.624009] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:10.367 [2024-11-20 10:35:13.624019] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.367 [2024-11-20 10:35:13.626144] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.367 [2024-11-20 10:35:13.626185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:10.367 BaseBdev4 00:12:10.367 10:35:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.367 10:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:10.367 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.367 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.367 [2024-11-20 10:35:13.635982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:10.367 [2024-11-20 10:35:13.637929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:10.367 [2024-11-20 10:35:13.638007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:10.367 [2024-11-20 10:35:13.638073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:10.367 [2024-11-20 10:35:13.638304] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:10.367 [2024-11-20 10:35:13.638318] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:10.367 [2024-11-20 10:35:13.638591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:10.367 [2024-11-20 10:35:13.638748] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:10.367 [2024-11-20 10:35:13.638766] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:10.367 [2024-11-20 10:35:13.638920] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.367 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.367 10:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:10.367 10:35:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:10.367 10:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:10.367 10:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:10.367 10:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:10.367 10:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:10.367 10:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.367 10:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.367 10:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.367 10:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.367 10:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.367 10:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.367 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.367 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.367 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.367 10:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.367 "name": "raid_bdev1", 00:12:10.367 "uuid": "7c1fc978-236c-40da-b885-a8b16b92488b", 00:12:10.367 "strip_size_kb": 64, 00:12:10.367 "state": "online", 00:12:10.367 "raid_level": "concat", 00:12:10.367 "superblock": true, 00:12:10.367 "num_base_bdevs": 4, 00:12:10.367 "num_base_bdevs_discovered": 4, 00:12:10.367 "num_base_bdevs_operational": 4, 00:12:10.367 "base_bdevs_list": [ 
00:12:10.367 { 00:12:10.367 "name": "BaseBdev1", 00:12:10.367 "uuid": "58630e89-3eb5-589e-832b-17287ef8ee3d", 00:12:10.367 "is_configured": true, 00:12:10.367 "data_offset": 2048, 00:12:10.367 "data_size": 63488 00:12:10.367 }, 00:12:10.368 { 00:12:10.368 "name": "BaseBdev2", 00:12:10.368 "uuid": "f00b9673-61a8-5748-9dab-d6d9623d64fb", 00:12:10.368 "is_configured": true, 00:12:10.368 "data_offset": 2048, 00:12:10.368 "data_size": 63488 00:12:10.368 }, 00:12:10.368 { 00:12:10.368 "name": "BaseBdev3", 00:12:10.368 "uuid": "ba044a2f-9fd9-5f52-9a09-d58c02ada90d", 00:12:10.368 "is_configured": true, 00:12:10.368 "data_offset": 2048, 00:12:10.368 "data_size": 63488 00:12:10.368 }, 00:12:10.368 { 00:12:10.368 "name": "BaseBdev4", 00:12:10.368 "uuid": "23fa273b-e487-5b92-a3cb-3e892a950e1c", 00:12:10.368 "is_configured": true, 00:12:10.368 "data_offset": 2048, 00:12:10.368 "data_size": 63488 00:12:10.368 } 00:12:10.368 ] 00:12:10.368 }' 00:12:10.368 10:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.368 10:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.627 10:35:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:10.627 10:35:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:10.887 [2024-11-20 10:35:14.164686] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:11.823 10:35:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:11.824 10:35:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.824 10:35:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.824 10:35:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.824 10:35:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:11.824 10:35:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:11.824 10:35:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:11.824 10:35:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:11.824 10:35:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:11.824 10:35:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.824 10:35:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:11.824 10:35:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:11.824 10:35:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.824 10:35:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.824 10:35:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.824 10:35:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.824 10:35:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.824 10:35:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.824 10:35:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.824 10:35:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.824 10:35:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.824 10:35:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.824 10:35:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.824 "name": "raid_bdev1", 00:12:11.824 "uuid": "7c1fc978-236c-40da-b885-a8b16b92488b", 00:12:11.824 "strip_size_kb": 64, 00:12:11.824 "state": "online", 00:12:11.824 "raid_level": "concat", 00:12:11.824 "superblock": true, 00:12:11.824 "num_base_bdevs": 4, 00:12:11.824 "num_base_bdevs_discovered": 4, 00:12:11.824 "num_base_bdevs_operational": 4, 00:12:11.824 "base_bdevs_list": [ 00:12:11.824 { 00:12:11.824 "name": "BaseBdev1", 00:12:11.824 "uuid": "58630e89-3eb5-589e-832b-17287ef8ee3d", 00:12:11.824 "is_configured": true, 00:12:11.824 "data_offset": 2048, 00:12:11.824 "data_size": 63488 00:12:11.824 }, 00:12:11.824 { 00:12:11.824 "name": "BaseBdev2", 00:12:11.824 "uuid": "f00b9673-61a8-5748-9dab-d6d9623d64fb", 00:12:11.824 "is_configured": true, 00:12:11.824 "data_offset": 2048, 00:12:11.824 "data_size": 63488 00:12:11.824 }, 00:12:11.824 { 00:12:11.824 "name": "BaseBdev3", 00:12:11.824 "uuid": "ba044a2f-9fd9-5f52-9a09-d58c02ada90d", 00:12:11.824 "is_configured": true, 00:12:11.824 "data_offset": 2048, 00:12:11.824 "data_size": 63488 00:12:11.824 }, 00:12:11.824 { 00:12:11.824 "name": "BaseBdev4", 00:12:11.824 "uuid": "23fa273b-e487-5b92-a3cb-3e892a950e1c", 00:12:11.824 "is_configured": true, 00:12:11.824 "data_offset": 2048, 00:12:11.824 "data_size": 63488 00:12:11.824 } 00:12:11.824 ] 00:12:11.824 }' 00:12:11.824 10:35:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.824 10:35:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.083 10:35:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:12.083 10:35:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.083 10:35:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.083 [2024-11-20 10:35:15.513132] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:12.083 [2024-11-20 10:35:15.513241] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:12.083 [2024-11-20 10:35:15.516127] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:12.083 [2024-11-20 10:35:15.516185] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:12.083 [2024-11-20 10:35:15.516227] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:12.083 [2024-11-20 10:35:15.516242] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:12.083 { 00:12:12.083 "results": [ 00:12:12.083 { 00:12:12.083 "job": "raid_bdev1", 00:12:12.083 "core_mask": "0x1", 00:12:12.083 "workload": "randrw", 00:12:12.083 "percentage": 50, 00:12:12.083 "status": "finished", 00:12:12.083 "queue_depth": 1, 00:12:12.083 "io_size": 131072, 00:12:12.083 "runtime": 1.349086, 00:12:12.083 "iops": 15142.844859408518, 00:12:12.083 "mibps": 1892.8556074260648, 00:12:12.083 "io_failed": 1, 00:12:12.083 "io_timeout": 0, 00:12:12.083 "avg_latency_us": 91.73833189055397, 00:12:12.083 "min_latency_us": 27.50043668122271, 00:12:12.083 "max_latency_us": 1738.564192139738 00:12:12.083 } 00:12:12.083 ], 00:12:12.083 "core_count": 1 00:12:12.083 } 00:12:12.083 10:35:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.083 10:35:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73078 00:12:12.083 10:35:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73078 ']' 00:12:12.083 10:35:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73078 00:12:12.083 10:35:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:12.083 10:35:15 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:12.083 10:35:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73078 00:12:12.083 10:35:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:12.084 10:35:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:12.084 10:35:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73078' 00:12:12.084 killing process with pid 73078 00:12:12.342 10:35:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73078 00:12:12.342 [2024-11-20 10:35:15.560794] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:12.342 10:35:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73078 00:12:12.625 [2024-11-20 10:35:15.881720] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:14.005 10:35:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:14.005 10:35:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LKatejA9zW 00:12:14.005 10:35:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:14.005 10:35:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:12:14.005 10:35:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:14.005 10:35:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:14.005 10:35:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:14.005 10:35:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:12:14.005 00:12:14.005 real 0m4.664s 00:12:14.005 user 0m5.457s 00:12:14.005 sys 0m0.604s 00:12:14.005 10:35:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:12:14.005 10:35:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.005 ************************************ 00:12:14.005 END TEST raid_read_error_test 00:12:14.005 ************************************ 00:12:14.005 10:35:17 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:12:14.005 10:35:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:14.005 10:35:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.005 10:35:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:14.005 ************************************ 00:12:14.005 START TEST raid_write_error_test 00:12:14.005 ************************************ 00:12:14.005 10:35:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:12:14.005 10:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:14.005 10:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:14.005 10:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:14.005 10:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:14.005 10:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:14.005 10:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:14.005 10:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:14.005 10:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:14.005 10:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:14.005 10:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:14.005 10:35:17 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:14.005 10:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:14.005 10:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:14.005 10:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:14.005 10:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:14.005 10:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:14.005 10:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:14.005 10:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:14.005 10:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:14.005 10:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:14.005 10:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:14.005 10:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:14.005 10:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:14.005 10:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:14.005 10:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:14.005 10:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:14.005 10:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:14.005 10:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:14.005 10:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.UEPexKfzim 00:12:14.005 10:35:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73218 00:12:14.005 10:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:14.005 10:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73218 00:12:14.005 10:35:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73218 ']' 00:12:14.005 10:35:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.005 10:35:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:14.005 10:35:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.005 10:35:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:14.005 10:35:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.005 [2024-11-20 10:35:17.245039] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:12:14.005 [2024-11-20 10:35:17.245157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73218 ] 00:12:14.005 [2024-11-20 10:35:17.418542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.270 [2024-11-20 10:35:17.538870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.530 [2024-11-20 10:35:17.748592] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:14.530 [2024-11-20 10:35:17.748641] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:14.790 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:14.790 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:14.790 10:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:14.790 10:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:14.790 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.790 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.790 BaseBdev1_malloc 00:12:14.790 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.790 10:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:14.790 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.790 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.790 true 00:12:14.790 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:14.790 10:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:14.790 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.790 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.790 [2024-11-20 10:35:18.155367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:14.790 [2024-11-20 10:35:18.155506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.790 [2024-11-20 10:35:18.155537] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:14.790 [2024-11-20 10:35:18.155551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.790 [2024-11-20 10:35:18.157989] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.790 [2024-11-20 10:35:18.158034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:14.790 BaseBdev1 00:12:14.790 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.790 10:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:14.790 10:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:14.790 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.790 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.790 BaseBdev2_malloc 00:12:14.790 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.790 10:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:14.790 10:35:18 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.790 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.790 true 00:12:14.790 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.790 10:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:14.790 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.790 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.791 [2024-11-20 10:35:18.225970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:14.791 [2024-11-20 10:35:18.226032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.791 [2024-11-20 10:35:18.226052] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:14.791 [2024-11-20 10:35:18.226064] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.791 [2024-11-20 10:35:18.228508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.791 [2024-11-20 10:35:18.228551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:14.791 BaseBdev2 00:12:14.791 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.791 10:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:14.791 10:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:14.791 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.791 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:15.050 BaseBdev3_malloc 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.050 true 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.050 [2024-11-20 10:35:18.313197] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:15.050 [2024-11-20 10:35:18.313281] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.050 [2024-11-20 10:35:18.313317] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:15.050 [2024-11-20 10:35:18.313336] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.050 [2024-11-20 10:35:18.316505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.050 [2024-11-20 10:35:18.316570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:15.050 BaseBdev3 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.050 BaseBdev4_malloc 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.050 true 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.050 [2024-11-20 10:35:18.384086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:15.050 [2024-11-20 10:35:18.384145] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.050 [2024-11-20 10:35:18.384168] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:15.050 [2024-11-20 10:35:18.384180] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.050 [2024-11-20 10:35:18.386595] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.050 [2024-11-20 10:35:18.386637] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:15.050 BaseBdev4 
00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.050 [2024-11-20 10:35:18.396126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:15.050 [2024-11-20 10:35:18.398262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:15.050 [2024-11-20 10:35:18.398348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:15.050 [2024-11-20 10:35:18.398443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:15.050 [2024-11-20 10:35:18.398700] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:15.050 [2024-11-20 10:35:18.398722] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:15.050 [2024-11-20 10:35:18.398994] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:15.050 [2024-11-20 10:35:18.399168] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:15.050 [2024-11-20 10:35:18.399180] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:15.050 [2024-11-20 10:35:18.399428] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.050 10:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.050 "name": "raid_bdev1", 00:12:15.050 "uuid": "2471fa15-fce4-4d94-a26d-e26749f36982", 00:12:15.050 "strip_size_kb": 64, 00:12:15.050 "state": "online", 00:12:15.050 "raid_level": "concat", 00:12:15.050 "superblock": true, 00:12:15.050 "num_base_bdevs": 4, 00:12:15.050 "num_base_bdevs_discovered": 4, 00:12:15.050 
"num_base_bdevs_operational": 4, 00:12:15.050 "base_bdevs_list": [ 00:12:15.051 { 00:12:15.051 "name": "BaseBdev1", 00:12:15.051 "uuid": "4ea32051-1055-5bdf-b69d-939b49a470fe", 00:12:15.051 "is_configured": true, 00:12:15.051 "data_offset": 2048, 00:12:15.051 "data_size": 63488 00:12:15.051 }, 00:12:15.051 { 00:12:15.051 "name": "BaseBdev2", 00:12:15.051 "uuid": "f2e52b9c-b2b7-533c-84aa-afca9bc1772c", 00:12:15.051 "is_configured": true, 00:12:15.051 "data_offset": 2048, 00:12:15.051 "data_size": 63488 00:12:15.051 }, 00:12:15.051 { 00:12:15.051 "name": "BaseBdev3", 00:12:15.051 "uuid": "b6454a78-2e50-5528-94f1-0d05583b6a79", 00:12:15.051 "is_configured": true, 00:12:15.051 "data_offset": 2048, 00:12:15.051 "data_size": 63488 00:12:15.051 }, 00:12:15.051 { 00:12:15.051 "name": "BaseBdev4", 00:12:15.051 "uuid": "606218cd-07b4-5fba-abcd-31ea4d436a6b", 00:12:15.051 "is_configured": true, 00:12:15.051 "data_offset": 2048, 00:12:15.051 "data_size": 63488 00:12:15.051 } 00:12:15.051 ] 00:12:15.051 }' 00:12:15.051 10:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.051 10:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.619 10:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:15.619 10:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:15.619 [2024-11-20 10:35:18.968663] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:16.556 10:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:16.556 10:35:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.556 10:35:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.556 10:35:19 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.556 10:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:16.556 10:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:16.556 10:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:16.556 10:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:16.556 10:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:16.556 10:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:16.556 10:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:16.556 10:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:16.556 10:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:16.556 10:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.556 10:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.556 10:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.557 10:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.557 10:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.557 10:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.557 10:35:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.557 10:35:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.557 10:35:19 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.557 10:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.557 "name": "raid_bdev1", 00:12:16.557 "uuid": "2471fa15-fce4-4d94-a26d-e26749f36982", 00:12:16.557 "strip_size_kb": 64, 00:12:16.557 "state": "online", 00:12:16.557 "raid_level": "concat", 00:12:16.557 "superblock": true, 00:12:16.557 "num_base_bdevs": 4, 00:12:16.557 "num_base_bdevs_discovered": 4, 00:12:16.557 "num_base_bdevs_operational": 4, 00:12:16.557 "base_bdevs_list": [ 00:12:16.557 { 00:12:16.557 "name": "BaseBdev1", 00:12:16.557 "uuid": "4ea32051-1055-5bdf-b69d-939b49a470fe", 00:12:16.557 "is_configured": true, 00:12:16.557 "data_offset": 2048, 00:12:16.557 "data_size": 63488 00:12:16.557 }, 00:12:16.557 { 00:12:16.557 "name": "BaseBdev2", 00:12:16.557 "uuid": "f2e52b9c-b2b7-533c-84aa-afca9bc1772c", 00:12:16.557 "is_configured": true, 00:12:16.557 "data_offset": 2048, 00:12:16.557 "data_size": 63488 00:12:16.557 }, 00:12:16.557 { 00:12:16.557 "name": "BaseBdev3", 00:12:16.557 "uuid": "b6454a78-2e50-5528-94f1-0d05583b6a79", 00:12:16.557 "is_configured": true, 00:12:16.557 "data_offset": 2048, 00:12:16.557 "data_size": 63488 00:12:16.557 }, 00:12:16.557 { 00:12:16.557 "name": "BaseBdev4", 00:12:16.557 "uuid": "606218cd-07b4-5fba-abcd-31ea4d436a6b", 00:12:16.557 "is_configured": true, 00:12:16.557 "data_offset": 2048, 00:12:16.557 "data_size": 63488 00:12:16.557 } 00:12:16.557 ] 00:12:16.557 }' 00:12:16.557 10:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.557 10:35:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.124 10:35:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:17.124 10:35:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.124 10:35:20 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:17.124 [2024-11-20 10:35:20.381160] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:17.124 [2024-11-20 10:35:20.381245] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:17.124 [2024-11-20 10:35:20.383916] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:17.124 [2024-11-20 10:35:20.384019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:17.124 [2024-11-20 10:35:20.384091] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:17.124 [2024-11-20 10:35:20.384141] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:17.124 10:35:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.124 { 00:12:17.124 "results": [ 00:12:17.124 { 00:12:17.124 "job": "raid_bdev1", 00:12:17.124 "core_mask": "0x1", 00:12:17.124 "workload": "randrw", 00:12:17.124 "percentage": 50, 00:12:17.124 "status": "finished", 00:12:17.124 "queue_depth": 1, 00:12:17.124 "io_size": 131072, 00:12:17.124 "runtime": 1.413406, 00:12:17.124 "iops": 15080.592554439418, 00:12:17.124 "mibps": 1885.0740693049272, 00:12:17.124 "io_failed": 1, 00:12:17.124 "io_timeout": 0, 00:12:17.124 "avg_latency_us": 92.17173232727575, 00:12:17.124 "min_latency_us": 26.382532751091702, 00:12:17.124 "max_latency_us": 1459.5353711790392 00:12:17.124 } 00:12:17.124 ], 00:12:17.124 "core_count": 1 00:12:17.124 } 00:12:17.124 10:35:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73218 00:12:17.124 10:35:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73218 ']' 00:12:17.124 10:35:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73218 00:12:17.124 10:35:20 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:12:17.124 10:35:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:17.124 10:35:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73218 00:12:17.124 killing process with pid 73218 00:12:17.124 10:35:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:17.124 10:35:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:17.124 10:35:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73218' 00:12:17.124 10:35:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73218 00:12:17.124 [2024-11-20 10:35:20.431455] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:17.124 10:35:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73218 00:12:17.382 [2024-11-20 10:35:20.759829] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:18.758 10:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.UEPexKfzim 00:12:18.758 10:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:18.758 10:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:18.758 10:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:12:18.758 10:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:18.758 ************************************ 00:12:18.758 END TEST raid_write_error_test 00:12:18.758 ************************************ 00:12:18.759 10:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:18.759 10:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:18.759 10:35:21 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:12:18.759 00:12:18.759 real 0m4.870s 00:12:18.759 user 0m5.797s 00:12:18.759 sys 0m0.597s 00:12:18.759 10:35:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:18.759 10:35:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.759 10:35:22 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:18.759 10:35:22 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:12:18.759 10:35:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:18.759 10:35:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.759 10:35:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:18.759 ************************************ 00:12:18.759 START TEST raid_state_function_test 00:12:18.759 ************************************ 00:12:18.759 10:35:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:12:18.759 10:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:18.759 10:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:18.759 10:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:18.759 10:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:18.759 10:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:18.759 10:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:18.759 10:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:18.759 10:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:12:18.759 10:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:18.759 10:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:18.759 10:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:18.759 10:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:18.759 10:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:18.759 10:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:18.759 10:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:18.759 10:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:18.759 10:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:18.759 10:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:18.759 10:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:18.759 10:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:18.759 10:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:18.759 10:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:18.759 10:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:18.759 10:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:18.759 10:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:18.759 10:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:18.759 10:35:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:18.759 10:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:18.759 10:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73367 00:12:18.759 10:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:18.759 10:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73367' 00:12:18.759 Process raid pid: 73367 00:12:18.759 10:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73367 00:12:18.759 10:35:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73367 ']' 00:12:18.759 10:35:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.759 10:35:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:18.759 10:35:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.759 10:35:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:18.759 10:35:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.759 [2024-11-20 10:35:22.165634] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:12:18.759 [2024-11-20 10:35:22.165850] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:19.044 [2024-11-20 10:35:22.347327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.044 [2024-11-20 10:35:22.470486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.302 [2024-11-20 10:35:22.681214] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:19.302 [2024-11-20 10:35:22.681339] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:19.561 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:19.561 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:19.561 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:19.561 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.561 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.561 [2024-11-20 10:35:23.034992] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:19.561 [2024-11-20 10:35:23.035103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:19.561 [2024-11-20 10:35:23.035120] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:19.561 [2024-11-20 10:35:23.035131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:19.561 [2024-11-20 10:35:23.035138] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:19.561 [2024-11-20 10:35:23.035148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:19.561 [2024-11-20 10:35:23.035155] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:19.561 [2024-11-20 10:35:23.035165] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:19.819 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.819 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:19.819 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.819 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.819 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.819 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.819 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.819 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.819 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.819 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.819 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.819 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.819 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.819 10:35:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.820 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.820 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.820 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.820 "name": "Existed_Raid", 00:12:19.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.820 "strip_size_kb": 0, 00:12:19.820 "state": "configuring", 00:12:19.820 "raid_level": "raid1", 00:12:19.820 "superblock": false, 00:12:19.820 "num_base_bdevs": 4, 00:12:19.820 "num_base_bdevs_discovered": 0, 00:12:19.820 "num_base_bdevs_operational": 4, 00:12:19.820 "base_bdevs_list": [ 00:12:19.820 { 00:12:19.820 "name": "BaseBdev1", 00:12:19.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.820 "is_configured": false, 00:12:19.820 "data_offset": 0, 00:12:19.820 "data_size": 0 00:12:19.820 }, 00:12:19.820 { 00:12:19.820 "name": "BaseBdev2", 00:12:19.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.820 "is_configured": false, 00:12:19.820 "data_offset": 0, 00:12:19.820 "data_size": 0 00:12:19.820 }, 00:12:19.820 { 00:12:19.820 "name": "BaseBdev3", 00:12:19.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.820 "is_configured": false, 00:12:19.820 "data_offset": 0, 00:12:19.820 "data_size": 0 00:12:19.820 }, 00:12:19.820 { 00:12:19.820 "name": "BaseBdev4", 00:12:19.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.820 "is_configured": false, 00:12:19.820 "data_offset": 0, 00:12:19.820 "data_size": 0 00:12:19.820 } 00:12:19.820 ] 00:12:19.820 }' 00:12:19.820 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.820 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.080 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:12:20.080 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.080 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.080 [2024-11-20 10:35:23.462220] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:20.080 [2024-11-20 10:35:23.462261] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:20.080 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.080 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:20.080 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.080 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.080 [2024-11-20 10:35:23.470189] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:20.080 [2024-11-20 10:35:23.470278] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:20.080 [2024-11-20 10:35:23.470291] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:20.080 [2024-11-20 10:35:23.470316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:20.080 [2024-11-20 10:35:23.470324] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:20.080 [2024-11-20 10:35:23.470333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:20.080 [2024-11-20 10:35:23.470340] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:20.080 [2024-11-20 10:35:23.470350] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:20.080 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.080 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:20.080 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.080 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.080 [2024-11-20 10:35:23.515726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:20.080 BaseBdev1 00:12:20.080 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.080 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:20.080 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:20.080 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:20.080 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:20.080 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:20.080 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:20.080 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:20.080 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.080 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.080 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.080 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:20.080 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.080 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.080 [ 00:12:20.080 { 00:12:20.080 "name": "BaseBdev1", 00:12:20.080 "aliases": [ 00:12:20.080 "4196ee3a-7c1e-4e03-916d-09c40a4544c9" 00:12:20.080 ], 00:12:20.080 "product_name": "Malloc disk", 00:12:20.080 "block_size": 512, 00:12:20.080 "num_blocks": 65536, 00:12:20.080 "uuid": "4196ee3a-7c1e-4e03-916d-09c40a4544c9", 00:12:20.080 "assigned_rate_limits": { 00:12:20.080 "rw_ios_per_sec": 0, 00:12:20.080 "rw_mbytes_per_sec": 0, 00:12:20.080 "r_mbytes_per_sec": 0, 00:12:20.080 "w_mbytes_per_sec": 0 00:12:20.080 }, 00:12:20.080 "claimed": true, 00:12:20.080 "claim_type": "exclusive_write", 00:12:20.080 "zoned": false, 00:12:20.080 "supported_io_types": { 00:12:20.080 "read": true, 00:12:20.080 "write": true, 00:12:20.080 "unmap": true, 00:12:20.080 "flush": true, 00:12:20.080 "reset": true, 00:12:20.080 "nvme_admin": false, 00:12:20.080 "nvme_io": false, 00:12:20.080 "nvme_io_md": false, 00:12:20.080 "write_zeroes": true, 00:12:20.080 "zcopy": true, 00:12:20.080 "get_zone_info": false, 00:12:20.080 "zone_management": false, 00:12:20.080 "zone_append": false, 00:12:20.080 "compare": false, 00:12:20.080 "compare_and_write": false, 00:12:20.080 "abort": true, 00:12:20.080 "seek_hole": false, 00:12:20.080 "seek_data": false, 00:12:20.080 "copy": true, 00:12:20.080 "nvme_iov_md": false 00:12:20.080 }, 00:12:20.080 "memory_domains": [ 00:12:20.080 { 00:12:20.080 "dma_device_id": "system", 00:12:20.080 "dma_device_type": 1 00:12:20.080 }, 00:12:20.080 { 00:12:20.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.080 "dma_device_type": 2 00:12:20.080 } 00:12:20.080 ], 00:12:20.080 "driver_specific": {} 00:12:20.080 } 00:12:20.080 ] 00:12:20.080 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:20.080 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:20.080 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:20.080 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.080 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.080 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.080 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.080 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.080 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.080 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.080 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.080 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.080 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.081 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.081 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.081 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.340 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.340 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.340 "name": "Existed_Raid", 
00:12:20.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.340 "strip_size_kb": 0, 00:12:20.340 "state": "configuring", 00:12:20.340 "raid_level": "raid1", 00:12:20.340 "superblock": false, 00:12:20.340 "num_base_bdevs": 4, 00:12:20.340 "num_base_bdevs_discovered": 1, 00:12:20.340 "num_base_bdevs_operational": 4, 00:12:20.340 "base_bdevs_list": [ 00:12:20.340 { 00:12:20.340 "name": "BaseBdev1", 00:12:20.340 "uuid": "4196ee3a-7c1e-4e03-916d-09c40a4544c9", 00:12:20.340 "is_configured": true, 00:12:20.340 "data_offset": 0, 00:12:20.340 "data_size": 65536 00:12:20.340 }, 00:12:20.340 { 00:12:20.340 "name": "BaseBdev2", 00:12:20.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.340 "is_configured": false, 00:12:20.340 "data_offset": 0, 00:12:20.340 "data_size": 0 00:12:20.340 }, 00:12:20.340 { 00:12:20.340 "name": "BaseBdev3", 00:12:20.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.340 "is_configured": false, 00:12:20.340 "data_offset": 0, 00:12:20.340 "data_size": 0 00:12:20.340 }, 00:12:20.340 { 00:12:20.340 "name": "BaseBdev4", 00:12:20.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.340 "is_configured": false, 00:12:20.340 "data_offset": 0, 00:12:20.340 "data_size": 0 00:12:20.340 } 00:12:20.340 ] 00:12:20.340 }' 00:12:20.340 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.340 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.600 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:20.600 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.600 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.600 [2024-11-20 10:35:23.935048] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:20.600 [2024-11-20 10:35:23.935153] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:20.600 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.600 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:20.600 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.600 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.600 [2024-11-20 10:35:23.943076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:20.600 [2024-11-20 10:35:23.945046] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:20.600 [2024-11-20 10:35:23.945132] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:20.600 [2024-11-20 10:35:23.945172] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:20.600 [2024-11-20 10:35:23.945201] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:20.600 [2024-11-20 10:35:23.945237] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:20.600 [2024-11-20 10:35:23.945263] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:20.600 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.600 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:20.600 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:20.600 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:20.600 
10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.600 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.600 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.600 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.600 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.600 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.600 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.600 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.600 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.600 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.600 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.600 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.600 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.600 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.600 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.600 "name": "Existed_Raid", 00:12:20.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.600 "strip_size_kb": 0, 00:12:20.600 "state": "configuring", 00:12:20.600 "raid_level": "raid1", 00:12:20.600 "superblock": false, 00:12:20.600 "num_base_bdevs": 4, 00:12:20.600 "num_base_bdevs_discovered": 1, 
00:12:20.600 "num_base_bdevs_operational": 4, 00:12:20.600 "base_bdevs_list": [ 00:12:20.600 { 00:12:20.600 "name": "BaseBdev1", 00:12:20.600 "uuid": "4196ee3a-7c1e-4e03-916d-09c40a4544c9", 00:12:20.600 "is_configured": true, 00:12:20.600 "data_offset": 0, 00:12:20.600 "data_size": 65536 00:12:20.600 }, 00:12:20.600 { 00:12:20.600 "name": "BaseBdev2", 00:12:20.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.600 "is_configured": false, 00:12:20.600 "data_offset": 0, 00:12:20.600 "data_size": 0 00:12:20.600 }, 00:12:20.600 { 00:12:20.600 "name": "BaseBdev3", 00:12:20.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.600 "is_configured": false, 00:12:20.600 "data_offset": 0, 00:12:20.600 "data_size": 0 00:12:20.600 }, 00:12:20.600 { 00:12:20.600 "name": "BaseBdev4", 00:12:20.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.600 "is_configured": false, 00:12:20.600 "data_offset": 0, 00:12:20.600 "data_size": 0 00:12:20.600 } 00:12:20.600 ] 00:12:20.600 }' 00:12:20.600 10:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.600 10:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.169 10:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:21.169 10:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.169 10:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.169 [2024-11-20 10:35:24.449133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:21.169 BaseBdev2 00:12:21.169 10:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.169 10:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:21.169 10:35:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:21.169 10:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:21.169 10:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:21.169 10:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:21.169 10:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:21.169 10:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:21.169 10:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.169 10:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.169 10:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.169 10:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:21.169 10:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.169 10:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.169 [ 00:12:21.169 { 00:12:21.169 "name": "BaseBdev2", 00:12:21.169 "aliases": [ 00:12:21.169 "270448e0-eb1c-4d9c-92f9-cd7908646e18" 00:12:21.169 ], 00:12:21.169 "product_name": "Malloc disk", 00:12:21.169 "block_size": 512, 00:12:21.169 "num_blocks": 65536, 00:12:21.169 "uuid": "270448e0-eb1c-4d9c-92f9-cd7908646e18", 00:12:21.169 "assigned_rate_limits": { 00:12:21.169 "rw_ios_per_sec": 0, 00:12:21.169 "rw_mbytes_per_sec": 0, 00:12:21.169 "r_mbytes_per_sec": 0, 00:12:21.169 "w_mbytes_per_sec": 0 00:12:21.169 }, 00:12:21.169 "claimed": true, 00:12:21.169 "claim_type": "exclusive_write", 00:12:21.169 "zoned": false, 00:12:21.169 "supported_io_types": { 00:12:21.169 "read": true, 
00:12:21.169 "write": true, 00:12:21.169 "unmap": true, 00:12:21.169 "flush": true, 00:12:21.169 "reset": true, 00:12:21.169 "nvme_admin": false, 00:12:21.169 "nvme_io": false, 00:12:21.169 "nvme_io_md": false, 00:12:21.169 "write_zeroes": true, 00:12:21.169 "zcopy": true, 00:12:21.169 "get_zone_info": false, 00:12:21.169 "zone_management": false, 00:12:21.169 "zone_append": false, 00:12:21.169 "compare": false, 00:12:21.169 "compare_and_write": false, 00:12:21.169 "abort": true, 00:12:21.169 "seek_hole": false, 00:12:21.169 "seek_data": false, 00:12:21.169 "copy": true, 00:12:21.169 "nvme_iov_md": false 00:12:21.169 }, 00:12:21.169 "memory_domains": [ 00:12:21.169 { 00:12:21.169 "dma_device_id": "system", 00:12:21.169 "dma_device_type": 1 00:12:21.169 }, 00:12:21.169 { 00:12:21.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.169 "dma_device_type": 2 00:12:21.169 } 00:12:21.169 ], 00:12:21.169 "driver_specific": {} 00:12:21.169 } 00:12:21.169 ] 00:12:21.169 10:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.169 10:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:21.169 10:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:21.169 10:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:21.169 10:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:21.169 10:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.169 10:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:21.169 10:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.169 10:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:21.169 10:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.169 10:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.169 10:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.169 10:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.170 10:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.170 10:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.170 10:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.170 10:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.170 10:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.170 10:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.170 10:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.170 "name": "Existed_Raid", 00:12:21.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.170 "strip_size_kb": 0, 00:12:21.170 "state": "configuring", 00:12:21.170 "raid_level": "raid1", 00:12:21.170 "superblock": false, 00:12:21.170 "num_base_bdevs": 4, 00:12:21.170 "num_base_bdevs_discovered": 2, 00:12:21.170 "num_base_bdevs_operational": 4, 00:12:21.170 "base_bdevs_list": [ 00:12:21.170 { 00:12:21.170 "name": "BaseBdev1", 00:12:21.170 "uuid": "4196ee3a-7c1e-4e03-916d-09c40a4544c9", 00:12:21.170 "is_configured": true, 00:12:21.170 "data_offset": 0, 00:12:21.170 "data_size": 65536 00:12:21.170 }, 00:12:21.170 { 00:12:21.170 "name": "BaseBdev2", 00:12:21.170 "uuid": "270448e0-eb1c-4d9c-92f9-cd7908646e18", 00:12:21.170 "is_configured": true, 
00:12:21.170 "data_offset": 0, 00:12:21.170 "data_size": 65536 00:12:21.170 }, 00:12:21.170 { 00:12:21.170 "name": "BaseBdev3", 00:12:21.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.170 "is_configured": false, 00:12:21.170 "data_offset": 0, 00:12:21.170 "data_size": 0 00:12:21.170 }, 00:12:21.170 { 00:12:21.170 "name": "BaseBdev4", 00:12:21.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.170 "is_configured": false, 00:12:21.170 "data_offset": 0, 00:12:21.170 "data_size": 0 00:12:21.170 } 00:12:21.170 ] 00:12:21.170 }' 00:12:21.170 10:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.170 10:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.429 10:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:21.429 10:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.429 10:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.689 BaseBdev3 00:12:21.689 [2024-11-20 10:35:24.936122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:21.689 10:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.689 10:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:21.689 10:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:21.689 10:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:21.689 10:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:21.689 10:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:21.689 10:35:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:21.689 10:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:21.689 10:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.689 10:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.689 10:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.689 10:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:21.689 10:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.689 10:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.689 [ 00:12:21.689 { 00:12:21.689 "name": "BaseBdev3", 00:12:21.689 "aliases": [ 00:12:21.689 "aad8a8db-ce65-407e-9a04-b444892923c4" 00:12:21.689 ], 00:12:21.689 "product_name": "Malloc disk", 00:12:21.689 "block_size": 512, 00:12:21.689 "num_blocks": 65536, 00:12:21.689 "uuid": "aad8a8db-ce65-407e-9a04-b444892923c4", 00:12:21.689 "assigned_rate_limits": { 00:12:21.689 "rw_ios_per_sec": 0, 00:12:21.689 "rw_mbytes_per_sec": 0, 00:12:21.689 "r_mbytes_per_sec": 0, 00:12:21.689 "w_mbytes_per_sec": 0 00:12:21.689 }, 00:12:21.689 "claimed": true, 00:12:21.689 "claim_type": "exclusive_write", 00:12:21.689 "zoned": false, 00:12:21.689 "supported_io_types": { 00:12:21.689 "read": true, 00:12:21.689 "write": true, 00:12:21.689 "unmap": true, 00:12:21.689 "flush": true, 00:12:21.689 "reset": true, 00:12:21.689 "nvme_admin": false, 00:12:21.689 "nvme_io": false, 00:12:21.689 "nvme_io_md": false, 00:12:21.689 "write_zeroes": true, 00:12:21.689 "zcopy": true, 00:12:21.689 "get_zone_info": false, 00:12:21.689 "zone_management": false, 00:12:21.689 "zone_append": false, 00:12:21.689 "compare": false, 00:12:21.689 "compare_and_write": false, 
00:12:21.689 "abort": true, 00:12:21.689 "seek_hole": false, 00:12:21.689 "seek_data": false, 00:12:21.689 "copy": true, 00:12:21.689 "nvme_iov_md": false 00:12:21.689 }, 00:12:21.689 "memory_domains": [ 00:12:21.689 { 00:12:21.689 "dma_device_id": "system", 00:12:21.689 "dma_device_type": 1 00:12:21.689 }, 00:12:21.689 { 00:12:21.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.689 "dma_device_type": 2 00:12:21.689 } 00:12:21.689 ], 00:12:21.689 "driver_specific": {} 00:12:21.689 } 00:12:21.689 ] 00:12:21.689 10:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.689 10:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:21.689 10:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:21.689 10:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:21.689 10:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:21.689 10:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.689 10:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:21.689 10:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.689 10:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.689 10:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.689 10:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.689 10:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.689 10:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:21.689 10:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.689 10:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.689 10:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.689 10:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.689 10:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.689 10:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.689 10:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.689 "name": "Existed_Raid", 00:12:21.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.689 "strip_size_kb": 0, 00:12:21.689 "state": "configuring", 00:12:21.689 "raid_level": "raid1", 00:12:21.689 "superblock": false, 00:12:21.689 "num_base_bdevs": 4, 00:12:21.689 "num_base_bdevs_discovered": 3, 00:12:21.689 "num_base_bdevs_operational": 4, 00:12:21.689 "base_bdevs_list": [ 00:12:21.689 { 00:12:21.689 "name": "BaseBdev1", 00:12:21.689 "uuid": "4196ee3a-7c1e-4e03-916d-09c40a4544c9", 00:12:21.689 "is_configured": true, 00:12:21.689 "data_offset": 0, 00:12:21.689 "data_size": 65536 00:12:21.689 }, 00:12:21.689 { 00:12:21.689 "name": "BaseBdev2", 00:12:21.690 "uuid": "270448e0-eb1c-4d9c-92f9-cd7908646e18", 00:12:21.690 "is_configured": true, 00:12:21.690 "data_offset": 0, 00:12:21.690 "data_size": 65536 00:12:21.690 }, 00:12:21.690 { 00:12:21.690 "name": "BaseBdev3", 00:12:21.690 "uuid": "aad8a8db-ce65-407e-9a04-b444892923c4", 00:12:21.690 "is_configured": true, 00:12:21.690 "data_offset": 0, 00:12:21.690 "data_size": 65536 00:12:21.690 }, 00:12:21.690 { 00:12:21.690 "name": "BaseBdev4", 00:12:21.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.690 "is_configured": false, 
00:12:21.690 "data_offset": 0, 00:12:21.690 "data_size": 0 00:12:21.690 } 00:12:21.690 ] 00:12:21.690 }' 00:12:21.690 10:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.690 10:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.949 10:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:21.949 10:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.949 10:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.209 [2024-11-20 10:35:25.441530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:22.209 [2024-11-20 10:35:25.441656] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:22.209 [2024-11-20 10:35:25.441680] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:22.209 [2024-11-20 10:35:25.441992] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:22.209 [2024-11-20 10:35:25.442204] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:22.210 [2024-11-20 10:35:25.442255] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:22.210 [2024-11-20 10:35:25.442565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:22.210 BaseBdev4 00:12:22.210 10:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.210 10:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:22.210 10:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:22.210 10:35:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:22.210 10:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:22.210 10:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:22.210 10:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:22.210 10:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:22.210 10:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.210 10:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.210 10:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.210 10:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:22.210 10:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.210 10:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.210 [ 00:12:22.210 { 00:12:22.210 "name": "BaseBdev4", 00:12:22.210 "aliases": [ 00:12:22.210 "9e525801-0f57-49b6-9e31-54ddb3a700d1" 00:12:22.210 ], 00:12:22.210 "product_name": "Malloc disk", 00:12:22.210 "block_size": 512, 00:12:22.210 "num_blocks": 65536, 00:12:22.210 "uuid": "9e525801-0f57-49b6-9e31-54ddb3a700d1", 00:12:22.210 "assigned_rate_limits": { 00:12:22.210 "rw_ios_per_sec": 0, 00:12:22.210 "rw_mbytes_per_sec": 0, 00:12:22.210 "r_mbytes_per_sec": 0, 00:12:22.210 "w_mbytes_per_sec": 0 00:12:22.210 }, 00:12:22.210 "claimed": true, 00:12:22.210 "claim_type": "exclusive_write", 00:12:22.210 "zoned": false, 00:12:22.210 "supported_io_types": { 00:12:22.210 "read": true, 00:12:22.210 "write": true, 00:12:22.210 "unmap": true, 00:12:22.210 "flush": true, 00:12:22.210 "reset": true, 00:12:22.210 
"nvme_admin": false, 00:12:22.210 "nvme_io": false, 00:12:22.210 "nvme_io_md": false, 00:12:22.210 "write_zeroes": true, 00:12:22.210 "zcopy": true, 00:12:22.210 "get_zone_info": false, 00:12:22.210 "zone_management": false, 00:12:22.210 "zone_append": false, 00:12:22.210 "compare": false, 00:12:22.210 "compare_and_write": false, 00:12:22.210 "abort": true, 00:12:22.210 "seek_hole": false, 00:12:22.210 "seek_data": false, 00:12:22.210 "copy": true, 00:12:22.210 "nvme_iov_md": false 00:12:22.210 }, 00:12:22.210 "memory_domains": [ 00:12:22.210 { 00:12:22.210 "dma_device_id": "system", 00:12:22.210 "dma_device_type": 1 00:12:22.210 }, 00:12:22.210 { 00:12:22.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.210 "dma_device_type": 2 00:12:22.210 } 00:12:22.210 ], 00:12:22.210 "driver_specific": {} 00:12:22.210 } 00:12:22.210 ] 00:12:22.210 10:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.210 10:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:22.210 10:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:22.210 10:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:22.210 10:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:22.210 10:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.210 10:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:22.210 10:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.210 10:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.210 10:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:22.210 10:35:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.210 10:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.210 10:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.210 10:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.210 10:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.210 10:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.210 10:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.210 10:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.210 10:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.210 10:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.210 "name": "Existed_Raid", 00:12:22.210 "uuid": "23d975ae-5c49-4540-adbb-dfb10ecae69e", 00:12:22.210 "strip_size_kb": 0, 00:12:22.210 "state": "online", 00:12:22.210 "raid_level": "raid1", 00:12:22.210 "superblock": false, 00:12:22.210 "num_base_bdevs": 4, 00:12:22.210 "num_base_bdevs_discovered": 4, 00:12:22.210 "num_base_bdevs_operational": 4, 00:12:22.210 "base_bdevs_list": [ 00:12:22.210 { 00:12:22.210 "name": "BaseBdev1", 00:12:22.210 "uuid": "4196ee3a-7c1e-4e03-916d-09c40a4544c9", 00:12:22.210 "is_configured": true, 00:12:22.210 "data_offset": 0, 00:12:22.210 "data_size": 65536 00:12:22.210 }, 00:12:22.210 { 00:12:22.210 "name": "BaseBdev2", 00:12:22.210 "uuid": "270448e0-eb1c-4d9c-92f9-cd7908646e18", 00:12:22.210 "is_configured": true, 00:12:22.210 "data_offset": 0, 00:12:22.210 "data_size": 65536 00:12:22.210 }, 00:12:22.210 { 00:12:22.210 "name": "BaseBdev3", 00:12:22.210 "uuid": 
"aad8a8db-ce65-407e-9a04-b444892923c4", 00:12:22.210 "is_configured": true, 00:12:22.210 "data_offset": 0, 00:12:22.210 "data_size": 65536 00:12:22.210 }, 00:12:22.210 { 00:12:22.210 "name": "BaseBdev4", 00:12:22.210 "uuid": "9e525801-0f57-49b6-9e31-54ddb3a700d1", 00:12:22.210 "is_configured": true, 00:12:22.210 "data_offset": 0, 00:12:22.210 "data_size": 65536 00:12:22.210 } 00:12:22.210 ] 00:12:22.210 }' 00:12:22.210 10:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.210 10:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.469 10:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:22.469 10:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:22.469 10:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:22.470 10:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:22.470 10:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:22.470 10:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:22.470 10:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:22.470 10:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:22.470 10:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.470 10:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.470 [2024-11-20 10:35:25.925074] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:22.730 10:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.730 10:35:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:22.730 "name": "Existed_Raid", 00:12:22.730 "aliases": [ 00:12:22.730 "23d975ae-5c49-4540-adbb-dfb10ecae69e" 00:12:22.730 ], 00:12:22.730 "product_name": "Raid Volume", 00:12:22.730 "block_size": 512, 00:12:22.730 "num_blocks": 65536, 00:12:22.730 "uuid": "23d975ae-5c49-4540-adbb-dfb10ecae69e", 00:12:22.730 "assigned_rate_limits": { 00:12:22.730 "rw_ios_per_sec": 0, 00:12:22.730 "rw_mbytes_per_sec": 0, 00:12:22.730 "r_mbytes_per_sec": 0, 00:12:22.730 "w_mbytes_per_sec": 0 00:12:22.730 }, 00:12:22.730 "claimed": false, 00:12:22.730 "zoned": false, 00:12:22.730 "supported_io_types": { 00:12:22.730 "read": true, 00:12:22.730 "write": true, 00:12:22.730 "unmap": false, 00:12:22.730 "flush": false, 00:12:22.730 "reset": true, 00:12:22.730 "nvme_admin": false, 00:12:22.730 "nvme_io": false, 00:12:22.730 "nvme_io_md": false, 00:12:22.730 "write_zeroes": true, 00:12:22.730 "zcopy": false, 00:12:22.730 "get_zone_info": false, 00:12:22.730 "zone_management": false, 00:12:22.730 "zone_append": false, 00:12:22.730 "compare": false, 00:12:22.730 "compare_and_write": false, 00:12:22.730 "abort": false, 00:12:22.730 "seek_hole": false, 00:12:22.730 "seek_data": false, 00:12:22.730 "copy": false, 00:12:22.730 "nvme_iov_md": false 00:12:22.730 }, 00:12:22.730 "memory_domains": [ 00:12:22.730 { 00:12:22.730 "dma_device_id": "system", 00:12:22.730 "dma_device_type": 1 00:12:22.730 }, 00:12:22.730 { 00:12:22.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.730 "dma_device_type": 2 00:12:22.730 }, 00:12:22.730 { 00:12:22.730 "dma_device_id": "system", 00:12:22.730 "dma_device_type": 1 00:12:22.730 }, 00:12:22.730 { 00:12:22.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.730 "dma_device_type": 2 00:12:22.730 }, 00:12:22.730 { 00:12:22.730 "dma_device_id": "system", 00:12:22.730 "dma_device_type": 1 00:12:22.730 }, 00:12:22.730 { 00:12:22.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:12:22.730 "dma_device_type": 2 00:12:22.730 }, 00:12:22.730 { 00:12:22.730 "dma_device_id": "system", 00:12:22.730 "dma_device_type": 1 00:12:22.730 }, 00:12:22.730 { 00:12:22.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.730 "dma_device_type": 2 00:12:22.730 } 00:12:22.730 ], 00:12:22.730 "driver_specific": { 00:12:22.730 "raid": { 00:12:22.730 "uuid": "23d975ae-5c49-4540-adbb-dfb10ecae69e", 00:12:22.730 "strip_size_kb": 0, 00:12:22.730 "state": "online", 00:12:22.730 "raid_level": "raid1", 00:12:22.730 "superblock": false, 00:12:22.730 "num_base_bdevs": 4, 00:12:22.730 "num_base_bdevs_discovered": 4, 00:12:22.730 "num_base_bdevs_operational": 4, 00:12:22.730 "base_bdevs_list": [ 00:12:22.730 { 00:12:22.730 "name": "BaseBdev1", 00:12:22.730 "uuid": "4196ee3a-7c1e-4e03-916d-09c40a4544c9", 00:12:22.730 "is_configured": true, 00:12:22.730 "data_offset": 0, 00:12:22.730 "data_size": 65536 00:12:22.730 }, 00:12:22.730 { 00:12:22.730 "name": "BaseBdev2", 00:12:22.730 "uuid": "270448e0-eb1c-4d9c-92f9-cd7908646e18", 00:12:22.730 "is_configured": true, 00:12:22.730 "data_offset": 0, 00:12:22.730 "data_size": 65536 00:12:22.730 }, 00:12:22.730 { 00:12:22.730 "name": "BaseBdev3", 00:12:22.730 "uuid": "aad8a8db-ce65-407e-9a04-b444892923c4", 00:12:22.730 "is_configured": true, 00:12:22.730 "data_offset": 0, 00:12:22.730 "data_size": 65536 00:12:22.730 }, 00:12:22.730 { 00:12:22.730 "name": "BaseBdev4", 00:12:22.730 "uuid": "9e525801-0f57-49b6-9e31-54ddb3a700d1", 00:12:22.730 "is_configured": true, 00:12:22.730 "data_offset": 0, 00:12:22.730 "data_size": 65536 00:12:22.730 } 00:12:22.730 ] 00:12:22.730 } 00:12:22.730 } 00:12:22.730 }' 00:12:22.730 10:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:22.730 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:22.730 BaseBdev2 00:12:22.730 BaseBdev3 
00:12:22.730 BaseBdev4' 00:12:22.730 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.730 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:22.730 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.730 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.730 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:22.730 10:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.730 10:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.730 10:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.730 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.730 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.730 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.730 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:22.730 10:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.730 10:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.730 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.730 10:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.730 10:35:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.730 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.730 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.730 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:22.730 10:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.730 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.730 10:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.730 10:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.730 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.730 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.730 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.730 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:22.730 10:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.730 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.730 10:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.991 10:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.991 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.991 10:35:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.991 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:22.991 10:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.991 10:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.991 [2024-11-20 10:35:26.252287] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:22.991 10:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.991 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:22.991 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:22.991 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:22.991 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:22.991 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:22.991 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:22.991 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.991 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:22.991 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.991 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.991 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:22.991 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.991 
10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.991 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.991 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.991 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.991 10:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.991 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.991 10:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.991 10:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.991 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.991 "name": "Existed_Raid", 00:12:22.991 "uuid": "23d975ae-5c49-4540-adbb-dfb10ecae69e", 00:12:22.991 "strip_size_kb": 0, 00:12:22.991 "state": "online", 00:12:22.991 "raid_level": "raid1", 00:12:22.991 "superblock": false, 00:12:22.991 "num_base_bdevs": 4, 00:12:22.991 "num_base_bdevs_discovered": 3, 00:12:22.991 "num_base_bdevs_operational": 3, 00:12:22.991 "base_bdevs_list": [ 00:12:22.991 { 00:12:22.991 "name": null, 00:12:22.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.991 "is_configured": false, 00:12:22.991 "data_offset": 0, 00:12:22.991 "data_size": 65536 00:12:22.991 }, 00:12:22.991 { 00:12:22.991 "name": "BaseBdev2", 00:12:22.991 "uuid": "270448e0-eb1c-4d9c-92f9-cd7908646e18", 00:12:22.991 "is_configured": true, 00:12:22.991 "data_offset": 0, 00:12:22.991 "data_size": 65536 00:12:22.991 }, 00:12:22.991 { 00:12:22.991 "name": "BaseBdev3", 00:12:22.991 "uuid": "aad8a8db-ce65-407e-9a04-b444892923c4", 00:12:22.991 "is_configured": true, 00:12:22.991 "data_offset": 0, 
00:12:22.991 "data_size": 65536 00:12:22.991 }, 00:12:22.991 { 00:12:22.991 "name": "BaseBdev4", 00:12:22.991 "uuid": "9e525801-0f57-49b6-9e31-54ddb3a700d1", 00:12:22.991 "is_configured": true, 00:12:22.991 "data_offset": 0, 00:12:22.991 "data_size": 65536 00:12:22.991 } 00:12:22.991 ] 00:12:22.991 }' 00:12:22.991 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.991 10:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.561 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:23.561 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:23.561 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.561 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:23.561 10:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.561 10:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.561 10:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.562 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:23.562 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:23.562 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:23.562 10:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.562 10:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.562 [2024-11-20 10:35:26.897148] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:23.562 10:35:26 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.562 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:23.562 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:23.562 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.562 10:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.562 10:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:23.562 10:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.562 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.822 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:23.822 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:23.822 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:23.822 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.822 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.822 [2024-11-20 10:35:27.052170] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:23.822 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.822 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:23.822 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:23.822 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:23.822 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:12:23.822 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.822 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.822 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.822 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:23.822 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:23.822 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:23.822 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.822 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.822 [2024-11-20 10:35:27.195665] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:23.822 [2024-11-20 10:35:27.195788] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:23.822 [2024-11-20 10:35:27.291960] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:23.822 [2024-11-20 10:35:27.292007] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:23.822 [2024-11-20 10:35:27.292019] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:23.822 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.822 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:23.822 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:24.082 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:12:24.082 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:24.082 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.082 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.082 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.082 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:24.082 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:24.082 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:24.082 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:24.082 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:24.082 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:24.082 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.082 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.082 BaseBdev2 00:12:24.082 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.082 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:24.082 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:24.082 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:24.082 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:24.082 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 
-- # [[ -z '' ]] 00:12:24.082 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:24.082 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:24.082 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.082 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.082 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.082 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:24.082 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.082 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.082 [ 00:12:24.082 { 00:12:24.082 "name": "BaseBdev2", 00:12:24.082 "aliases": [ 00:12:24.082 "274d7acd-20aa-4f3a-85d6-3bf96a109516" 00:12:24.082 ], 00:12:24.082 "product_name": "Malloc disk", 00:12:24.082 "block_size": 512, 00:12:24.082 "num_blocks": 65536, 00:12:24.082 "uuid": "274d7acd-20aa-4f3a-85d6-3bf96a109516", 00:12:24.082 "assigned_rate_limits": { 00:12:24.082 "rw_ios_per_sec": 0, 00:12:24.082 "rw_mbytes_per_sec": 0, 00:12:24.082 "r_mbytes_per_sec": 0, 00:12:24.082 "w_mbytes_per_sec": 0 00:12:24.082 }, 00:12:24.082 "claimed": false, 00:12:24.082 "zoned": false, 00:12:24.082 "supported_io_types": { 00:12:24.082 "read": true, 00:12:24.082 "write": true, 00:12:24.082 "unmap": true, 00:12:24.082 "flush": true, 00:12:24.082 "reset": true, 00:12:24.082 "nvme_admin": false, 00:12:24.082 "nvme_io": false, 00:12:24.082 "nvme_io_md": false, 00:12:24.082 "write_zeroes": true, 00:12:24.082 "zcopy": true, 00:12:24.082 "get_zone_info": false, 00:12:24.082 "zone_management": false, 00:12:24.082 "zone_append": false, 00:12:24.082 "compare": false, 
00:12:24.082 "compare_and_write": false, 00:12:24.082 "abort": true, 00:12:24.082 "seek_hole": false, 00:12:24.082 "seek_data": false, 00:12:24.082 "copy": true, 00:12:24.082 "nvme_iov_md": false 00:12:24.082 }, 00:12:24.082 "memory_domains": [ 00:12:24.082 { 00:12:24.083 "dma_device_id": "system", 00:12:24.083 "dma_device_type": 1 00:12:24.083 }, 00:12:24.083 { 00:12:24.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.083 "dma_device_type": 2 00:12:24.083 } 00:12:24.083 ], 00:12:24.083 "driver_specific": {} 00:12:24.083 } 00:12:24.083 ] 00:12:24.083 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.083 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:24.083 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:24.083 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:24.083 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:24.083 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.083 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.083 BaseBdev3 00:12:24.083 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.083 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:24.083 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:24.083 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:24.083 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:24.083 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' 
]] 00:12:24.083 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:24.083 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:24.083 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.083 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.083 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.083 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:24.083 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.083 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.083 [ 00:12:24.083 { 00:12:24.083 "name": "BaseBdev3", 00:12:24.083 "aliases": [ 00:12:24.083 "54ce9824-0066-4a3a-88d9-0e8b15b947ba" 00:12:24.083 ], 00:12:24.083 "product_name": "Malloc disk", 00:12:24.083 "block_size": 512, 00:12:24.083 "num_blocks": 65536, 00:12:24.083 "uuid": "54ce9824-0066-4a3a-88d9-0e8b15b947ba", 00:12:24.083 "assigned_rate_limits": { 00:12:24.083 "rw_ios_per_sec": 0, 00:12:24.083 "rw_mbytes_per_sec": 0, 00:12:24.083 "r_mbytes_per_sec": 0, 00:12:24.083 "w_mbytes_per_sec": 0 00:12:24.083 }, 00:12:24.083 "claimed": false, 00:12:24.083 "zoned": false, 00:12:24.083 "supported_io_types": { 00:12:24.083 "read": true, 00:12:24.083 "write": true, 00:12:24.083 "unmap": true, 00:12:24.083 "flush": true, 00:12:24.083 "reset": true, 00:12:24.083 "nvme_admin": false, 00:12:24.083 "nvme_io": false, 00:12:24.083 "nvme_io_md": false, 00:12:24.083 "write_zeroes": true, 00:12:24.083 "zcopy": true, 00:12:24.083 "get_zone_info": false, 00:12:24.083 "zone_management": false, 00:12:24.083 "zone_append": false, 00:12:24.083 "compare": false, 00:12:24.083 
"compare_and_write": false, 00:12:24.083 "abort": true, 00:12:24.083 "seek_hole": false, 00:12:24.083 "seek_data": false, 00:12:24.083 "copy": true, 00:12:24.083 "nvme_iov_md": false 00:12:24.083 }, 00:12:24.083 "memory_domains": [ 00:12:24.083 { 00:12:24.083 "dma_device_id": "system", 00:12:24.083 "dma_device_type": 1 00:12:24.083 }, 00:12:24.083 { 00:12:24.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.083 "dma_device_type": 2 00:12:24.083 } 00:12:24.083 ], 00:12:24.083 "driver_specific": {} 00:12:24.083 } 00:12:24.083 ] 00:12:24.083 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.083 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:24.083 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:24.083 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:24.083 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:24.083 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.083 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.083 BaseBdev4 00:12:24.083 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.083 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:24.083 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:24.083 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:24.083 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:24.083 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 
00:12:24.083 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:24.083 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:24.083 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.083 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.344 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.344 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:24.344 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.344 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.344 [ 00:12:24.344 { 00:12:24.344 "name": "BaseBdev4", 00:12:24.344 "aliases": [ 00:12:24.344 "31b41cc5-77fd-4a52-b32f-5390b12e378f" 00:12:24.344 ], 00:12:24.344 "product_name": "Malloc disk", 00:12:24.344 "block_size": 512, 00:12:24.344 "num_blocks": 65536, 00:12:24.344 "uuid": "31b41cc5-77fd-4a52-b32f-5390b12e378f", 00:12:24.344 "assigned_rate_limits": { 00:12:24.344 "rw_ios_per_sec": 0, 00:12:24.344 "rw_mbytes_per_sec": 0, 00:12:24.344 "r_mbytes_per_sec": 0, 00:12:24.344 "w_mbytes_per_sec": 0 00:12:24.344 }, 00:12:24.344 "claimed": false, 00:12:24.344 "zoned": false, 00:12:24.344 "supported_io_types": { 00:12:24.344 "read": true, 00:12:24.344 "write": true, 00:12:24.344 "unmap": true, 00:12:24.344 "flush": true, 00:12:24.344 "reset": true, 00:12:24.344 "nvme_admin": false, 00:12:24.344 "nvme_io": false, 00:12:24.344 "nvme_io_md": false, 00:12:24.344 "write_zeroes": true, 00:12:24.344 "zcopy": true, 00:12:24.344 "get_zone_info": false, 00:12:24.344 "zone_management": false, 00:12:24.344 "zone_append": false, 00:12:24.344 "compare": false, 00:12:24.344 
"compare_and_write": false, 00:12:24.344 "abort": true, 00:12:24.344 "seek_hole": false, 00:12:24.344 "seek_data": false, 00:12:24.344 "copy": true, 00:12:24.344 "nvme_iov_md": false 00:12:24.344 }, 00:12:24.344 "memory_domains": [ 00:12:24.344 { 00:12:24.344 "dma_device_id": "system", 00:12:24.344 "dma_device_type": 1 00:12:24.344 }, 00:12:24.344 { 00:12:24.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.344 "dma_device_type": 2 00:12:24.344 } 00:12:24.344 ], 00:12:24.344 "driver_specific": {} 00:12:24.344 } 00:12:24.344 ] 00:12:24.344 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.344 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:24.344 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:24.344 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:24.344 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:24.344 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.344 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.344 [2024-11-20 10:35:27.597599] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:24.344 [2024-11-20 10:35:27.597704] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:24.344 [2024-11-20 10:35:27.597729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:24.344 [2024-11-20 10:35:27.599543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:24.344 [2024-11-20 10:35:27.599589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:12:24.344 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.344 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:24.344 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:24.344 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:24.344 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.344 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.344 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:24.344 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.344 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.344 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.344 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.344 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.344 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.344 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.344 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.344 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.344 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.344 "name": "Existed_Raid", 00:12:24.344 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:24.344 "strip_size_kb": 0, 00:12:24.344 "state": "configuring", 00:12:24.344 "raid_level": "raid1", 00:12:24.344 "superblock": false, 00:12:24.344 "num_base_bdevs": 4, 00:12:24.344 "num_base_bdevs_discovered": 3, 00:12:24.344 "num_base_bdevs_operational": 4, 00:12:24.344 "base_bdevs_list": [ 00:12:24.344 { 00:12:24.344 "name": "BaseBdev1", 00:12:24.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.344 "is_configured": false, 00:12:24.344 "data_offset": 0, 00:12:24.344 "data_size": 0 00:12:24.344 }, 00:12:24.344 { 00:12:24.344 "name": "BaseBdev2", 00:12:24.344 "uuid": "274d7acd-20aa-4f3a-85d6-3bf96a109516", 00:12:24.344 "is_configured": true, 00:12:24.344 "data_offset": 0, 00:12:24.344 "data_size": 65536 00:12:24.344 }, 00:12:24.344 { 00:12:24.344 "name": "BaseBdev3", 00:12:24.344 "uuid": "54ce9824-0066-4a3a-88d9-0e8b15b947ba", 00:12:24.344 "is_configured": true, 00:12:24.344 "data_offset": 0, 00:12:24.344 "data_size": 65536 00:12:24.344 }, 00:12:24.344 { 00:12:24.344 "name": "BaseBdev4", 00:12:24.344 "uuid": "31b41cc5-77fd-4a52-b32f-5390b12e378f", 00:12:24.344 "is_configured": true, 00:12:24.344 "data_offset": 0, 00:12:24.344 "data_size": 65536 00:12:24.344 } 00:12:24.344 ] 00:12:24.344 }' 00:12:24.344 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.344 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.604 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:24.604 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.604 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.604 [2024-11-20 10:35:27.945041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:24.604 10:35:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.604 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:24.604 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:24.604 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:24.604 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.604 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.604 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:24.604 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.604 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.604 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.604 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.604 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.604 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.604 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.604 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.604 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.604 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.604 "name": "Existed_Raid", 00:12:24.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.604 
"strip_size_kb": 0, 00:12:24.604 "state": "configuring", 00:12:24.604 "raid_level": "raid1", 00:12:24.604 "superblock": false, 00:12:24.604 "num_base_bdevs": 4, 00:12:24.604 "num_base_bdevs_discovered": 2, 00:12:24.604 "num_base_bdevs_operational": 4, 00:12:24.604 "base_bdevs_list": [ 00:12:24.604 { 00:12:24.604 "name": "BaseBdev1", 00:12:24.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.604 "is_configured": false, 00:12:24.604 "data_offset": 0, 00:12:24.604 "data_size": 0 00:12:24.604 }, 00:12:24.604 { 00:12:24.604 "name": null, 00:12:24.604 "uuid": "274d7acd-20aa-4f3a-85d6-3bf96a109516", 00:12:24.604 "is_configured": false, 00:12:24.604 "data_offset": 0, 00:12:24.605 "data_size": 65536 00:12:24.605 }, 00:12:24.605 { 00:12:24.605 "name": "BaseBdev3", 00:12:24.605 "uuid": "54ce9824-0066-4a3a-88d9-0e8b15b947ba", 00:12:24.605 "is_configured": true, 00:12:24.605 "data_offset": 0, 00:12:24.605 "data_size": 65536 00:12:24.605 }, 00:12:24.605 { 00:12:24.605 "name": "BaseBdev4", 00:12:24.605 "uuid": "31b41cc5-77fd-4a52-b32f-5390b12e378f", 00:12:24.605 "is_configured": true, 00:12:24.605 "data_offset": 0, 00:12:24.605 "data_size": 65536 00:12:24.605 } 00:12:24.605 ] 00:12:24.605 }' 00:12:24.605 10:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.605 10:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.174 10:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.174 10:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:25.174 10:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.174 10:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.174 10:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.174 10:35:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:25.174 10:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:25.174 10:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.174 10:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.174 [2024-11-20 10:35:28.477631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:25.174 BaseBdev1 00:12:25.174 10:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.174 10:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:25.174 10:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:25.174 10:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:25.174 10:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:25.174 10:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:25.174 10:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:25.174 10:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:25.174 10:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.174 10:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.174 10:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.174 10:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:25.174 10:35:28 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.174 10:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.174 [ 00:12:25.174 { 00:12:25.174 "name": "BaseBdev1", 00:12:25.174 "aliases": [ 00:12:25.174 "c021475f-6dc7-423d-856b-e1d5899d5cb1" 00:12:25.174 ], 00:12:25.174 "product_name": "Malloc disk", 00:12:25.174 "block_size": 512, 00:12:25.174 "num_blocks": 65536, 00:12:25.174 "uuid": "c021475f-6dc7-423d-856b-e1d5899d5cb1", 00:12:25.174 "assigned_rate_limits": { 00:12:25.174 "rw_ios_per_sec": 0, 00:12:25.174 "rw_mbytes_per_sec": 0, 00:12:25.174 "r_mbytes_per_sec": 0, 00:12:25.174 "w_mbytes_per_sec": 0 00:12:25.174 }, 00:12:25.174 "claimed": true, 00:12:25.174 "claim_type": "exclusive_write", 00:12:25.174 "zoned": false, 00:12:25.174 "supported_io_types": { 00:12:25.174 "read": true, 00:12:25.174 "write": true, 00:12:25.174 "unmap": true, 00:12:25.174 "flush": true, 00:12:25.174 "reset": true, 00:12:25.174 "nvme_admin": false, 00:12:25.174 "nvme_io": false, 00:12:25.174 "nvme_io_md": false, 00:12:25.174 "write_zeroes": true, 00:12:25.174 "zcopy": true, 00:12:25.174 "get_zone_info": false, 00:12:25.174 "zone_management": false, 00:12:25.174 "zone_append": false, 00:12:25.174 "compare": false, 00:12:25.174 "compare_and_write": false, 00:12:25.174 "abort": true, 00:12:25.174 "seek_hole": false, 00:12:25.174 "seek_data": false, 00:12:25.174 "copy": true, 00:12:25.174 "nvme_iov_md": false 00:12:25.174 }, 00:12:25.174 "memory_domains": [ 00:12:25.174 { 00:12:25.174 "dma_device_id": "system", 00:12:25.174 "dma_device_type": 1 00:12:25.174 }, 00:12:25.174 { 00:12:25.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.174 "dma_device_type": 2 00:12:25.174 } 00:12:25.174 ], 00:12:25.174 "driver_specific": {} 00:12:25.174 } 00:12:25.174 ] 00:12:25.174 10:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.174 10:35:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@911 -- # return 0 00:12:25.174 10:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:25.174 10:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:25.174 10:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:25.174 10:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.174 10:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.174 10:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:25.174 10:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.174 10:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.174 10:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.174 10:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.175 10:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.175 10:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:25.175 10:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.175 10:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.175 10:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.175 10:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.175 "name": "Existed_Raid", 00:12:25.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.175 
"strip_size_kb": 0, 00:12:25.175 "state": "configuring", 00:12:25.175 "raid_level": "raid1", 00:12:25.175 "superblock": false, 00:12:25.175 "num_base_bdevs": 4, 00:12:25.175 "num_base_bdevs_discovered": 3, 00:12:25.175 "num_base_bdevs_operational": 4, 00:12:25.175 "base_bdevs_list": [ 00:12:25.175 { 00:12:25.175 "name": "BaseBdev1", 00:12:25.175 "uuid": "c021475f-6dc7-423d-856b-e1d5899d5cb1", 00:12:25.175 "is_configured": true, 00:12:25.175 "data_offset": 0, 00:12:25.175 "data_size": 65536 00:12:25.175 }, 00:12:25.175 { 00:12:25.175 "name": null, 00:12:25.175 "uuid": "274d7acd-20aa-4f3a-85d6-3bf96a109516", 00:12:25.175 "is_configured": false, 00:12:25.175 "data_offset": 0, 00:12:25.175 "data_size": 65536 00:12:25.175 }, 00:12:25.175 { 00:12:25.175 "name": "BaseBdev3", 00:12:25.175 "uuid": "54ce9824-0066-4a3a-88d9-0e8b15b947ba", 00:12:25.175 "is_configured": true, 00:12:25.175 "data_offset": 0, 00:12:25.175 "data_size": 65536 00:12:25.175 }, 00:12:25.175 { 00:12:25.175 "name": "BaseBdev4", 00:12:25.175 "uuid": "31b41cc5-77fd-4a52-b32f-5390b12e378f", 00:12:25.175 "is_configured": true, 00:12:25.175 "data_offset": 0, 00:12:25.175 "data_size": 65536 00:12:25.175 } 00:12:25.175 ] 00:12:25.175 }' 00:12:25.175 10:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.175 10:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.744 10:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.744 10:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:25.744 10:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.744 10:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.744 10:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.744 
10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:25.744 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:25.744 10:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.744 10:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.744 [2024-11-20 10:35:29.008844] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:25.744 10:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.744 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:25.744 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:25.744 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:25.744 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.744 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.744 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:25.744 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.744 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.744 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.744 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.744 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.744 10:35:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.744 10:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.744 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:25.744 10:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.744 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.744 "name": "Existed_Raid", 00:12:25.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.744 "strip_size_kb": 0, 00:12:25.744 "state": "configuring", 00:12:25.744 "raid_level": "raid1", 00:12:25.744 "superblock": false, 00:12:25.744 "num_base_bdevs": 4, 00:12:25.744 "num_base_bdevs_discovered": 2, 00:12:25.744 "num_base_bdevs_operational": 4, 00:12:25.744 "base_bdevs_list": [ 00:12:25.744 { 00:12:25.744 "name": "BaseBdev1", 00:12:25.744 "uuid": "c021475f-6dc7-423d-856b-e1d5899d5cb1", 00:12:25.744 "is_configured": true, 00:12:25.744 "data_offset": 0, 00:12:25.744 "data_size": 65536 00:12:25.744 }, 00:12:25.744 { 00:12:25.744 "name": null, 00:12:25.744 "uuid": "274d7acd-20aa-4f3a-85d6-3bf96a109516", 00:12:25.744 "is_configured": false, 00:12:25.744 "data_offset": 0, 00:12:25.744 "data_size": 65536 00:12:25.744 }, 00:12:25.744 { 00:12:25.744 "name": null, 00:12:25.744 "uuid": "54ce9824-0066-4a3a-88d9-0e8b15b947ba", 00:12:25.744 "is_configured": false, 00:12:25.744 "data_offset": 0, 00:12:25.744 "data_size": 65536 00:12:25.744 }, 00:12:25.744 { 00:12:25.744 "name": "BaseBdev4", 00:12:25.744 "uuid": "31b41cc5-77fd-4a52-b32f-5390b12e378f", 00:12:25.744 "is_configured": true, 00:12:25.744 "data_offset": 0, 00:12:25.744 "data_size": 65536 00:12:25.744 } 00:12:25.744 ] 00:12:25.744 }' 00:12:25.744 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.744 10:35:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:26.003 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:26.003 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.003 10:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.003 10:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.003 10:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.003 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:26.003 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:26.003 10:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.003 10:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.003 [2024-11-20 10:35:29.460031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:26.003 10:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.003 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:26.003 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:26.003 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:26.003 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.003 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.003 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:12:26.003 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.003 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.003 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.003 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.003 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:26.003 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.003 10:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.003 10:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.270 10:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.270 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.270 "name": "Existed_Raid", 00:12:26.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.270 "strip_size_kb": 0, 00:12:26.270 "state": "configuring", 00:12:26.270 "raid_level": "raid1", 00:12:26.270 "superblock": false, 00:12:26.270 "num_base_bdevs": 4, 00:12:26.270 "num_base_bdevs_discovered": 3, 00:12:26.270 "num_base_bdevs_operational": 4, 00:12:26.270 "base_bdevs_list": [ 00:12:26.270 { 00:12:26.270 "name": "BaseBdev1", 00:12:26.270 "uuid": "c021475f-6dc7-423d-856b-e1d5899d5cb1", 00:12:26.270 "is_configured": true, 00:12:26.270 "data_offset": 0, 00:12:26.270 "data_size": 65536 00:12:26.270 }, 00:12:26.270 { 00:12:26.270 "name": null, 00:12:26.270 "uuid": "274d7acd-20aa-4f3a-85d6-3bf96a109516", 00:12:26.270 "is_configured": false, 00:12:26.270 "data_offset": 0, 00:12:26.270 "data_size": 65536 00:12:26.270 }, 00:12:26.270 { 
00:12:26.270 "name": "BaseBdev3", 00:12:26.270 "uuid": "54ce9824-0066-4a3a-88d9-0e8b15b947ba", 00:12:26.270 "is_configured": true, 00:12:26.270 "data_offset": 0, 00:12:26.270 "data_size": 65536 00:12:26.270 }, 00:12:26.270 { 00:12:26.270 "name": "BaseBdev4", 00:12:26.270 "uuid": "31b41cc5-77fd-4a52-b32f-5390b12e378f", 00:12:26.270 "is_configured": true, 00:12:26.270 "data_offset": 0, 00:12:26.270 "data_size": 65536 00:12:26.270 } 00:12:26.270 ] 00:12:26.270 }' 00:12:26.270 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.270 10:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.540 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:26.540 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.540 10:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.540 10:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.540 10:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.540 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:26.540 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:26.540 10:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.540 10:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.540 [2024-11-20 10:35:29.927375] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:26.800 10:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.800 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:26.800 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:26.801 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:26.801 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.801 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.801 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:26.801 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.801 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.801 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.801 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.801 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.801 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:26.801 10:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.801 10:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.801 10:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.801 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.801 "name": "Existed_Raid", 00:12:26.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.801 "strip_size_kb": 0, 00:12:26.801 "state": "configuring", 00:12:26.801 "raid_level": "raid1", 00:12:26.801 "superblock": false, 00:12:26.801 
"num_base_bdevs": 4, 00:12:26.801 "num_base_bdevs_discovered": 2, 00:12:26.801 "num_base_bdevs_operational": 4, 00:12:26.801 "base_bdevs_list": [ 00:12:26.801 { 00:12:26.801 "name": null, 00:12:26.801 "uuid": "c021475f-6dc7-423d-856b-e1d5899d5cb1", 00:12:26.801 "is_configured": false, 00:12:26.801 "data_offset": 0, 00:12:26.801 "data_size": 65536 00:12:26.801 }, 00:12:26.801 { 00:12:26.801 "name": null, 00:12:26.801 "uuid": "274d7acd-20aa-4f3a-85d6-3bf96a109516", 00:12:26.801 "is_configured": false, 00:12:26.801 "data_offset": 0, 00:12:26.801 "data_size": 65536 00:12:26.801 }, 00:12:26.801 { 00:12:26.801 "name": "BaseBdev3", 00:12:26.801 "uuid": "54ce9824-0066-4a3a-88d9-0e8b15b947ba", 00:12:26.801 "is_configured": true, 00:12:26.801 "data_offset": 0, 00:12:26.801 "data_size": 65536 00:12:26.801 }, 00:12:26.801 { 00:12:26.801 "name": "BaseBdev4", 00:12:26.801 "uuid": "31b41cc5-77fd-4a52-b32f-5390b12e378f", 00:12:26.801 "is_configured": true, 00:12:26.801 "data_offset": 0, 00:12:26.801 "data_size": 65536 00:12:26.801 } 00:12:26.801 ] 00:12:26.801 }' 00:12:26.801 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.801 10:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.060 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:27.060 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.060 10:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.060 10:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.060 10:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.060 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:27.060 10:35:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:27.060 10:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.060 10:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.060 [2024-11-20 10:35:30.469098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:27.060 10:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.060 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:27.060 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:27.060 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:27.060 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.060 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.060 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:27.060 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.060 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.060 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.060 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.060 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.060 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.060 10:35:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.060 10:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.060 10:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.060 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.060 "name": "Existed_Raid", 00:12:27.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.060 "strip_size_kb": 0, 00:12:27.060 "state": "configuring", 00:12:27.060 "raid_level": "raid1", 00:12:27.060 "superblock": false, 00:12:27.060 "num_base_bdevs": 4, 00:12:27.060 "num_base_bdevs_discovered": 3, 00:12:27.060 "num_base_bdevs_operational": 4, 00:12:27.060 "base_bdevs_list": [ 00:12:27.060 { 00:12:27.060 "name": null, 00:12:27.060 "uuid": "c021475f-6dc7-423d-856b-e1d5899d5cb1", 00:12:27.060 "is_configured": false, 00:12:27.060 "data_offset": 0, 00:12:27.060 "data_size": 65536 00:12:27.060 }, 00:12:27.060 { 00:12:27.060 "name": "BaseBdev2", 00:12:27.060 "uuid": "274d7acd-20aa-4f3a-85d6-3bf96a109516", 00:12:27.060 "is_configured": true, 00:12:27.060 "data_offset": 0, 00:12:27.060 "data_size": 65536 00:12:27.060 }, 00:12:27.060 { 00:12:27.060 "name": "BaseBdev3", 00:12:27.060 "uuid": "54ce9824-0066-4a3a-88d9-0e8b15b947ba", 00:12:27.060 "is_configured": true, 00:12:27.060 "data_offset": 0, 00:12:27.060 "data_size": 65536 00:12:27.060 }, 00:12:27.060 { 00:12:27.060 "name": "BaseBdev4", 00:12:27.060 "uuid": "31b41cc5-77fd-4a52-b32f-5390b12e378f", 00:12:27.060 "is_configured": true, 00:12:27.060 "data_offset": 0, 00:12:27.060 "data_size": 65536 00:12:27.060 } 00:12:27.060 ] 00:12:27.060 }' 00:12:27.060 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.060 10:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.629 10:35:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.629 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:27.629 10:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.629 10:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.629 10:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.629 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:27.629 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.629 10:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.629 10:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.629 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:27.629 10:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.629 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c021475f-6dc7-423d-856b-e1d5899d5cb1 00:12:27.629 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.629 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.629 [2024-11-20 10:35:31.050038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:27.629 [2024-11-20 10:35:31.050170] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:27.629 [2024-11-20 10:35:31.050197] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:27.629 [2024-11-20 10:35:31.050511] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:27.629 [2024-11-20 10:35:31.050742] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:27.629 [2024-11-20 10:35:31.050789] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:27.629 [2024-11-20 10:35:31.051134] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.629 NewBaseBdev 00:12:27.629 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.629 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:27.629 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:27.629 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:27.629 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:27.629 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:27.629 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:27.629 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:27.629 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.629 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.629 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.629 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:27.629 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.629 10:35:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.629 [ 00:12:27.629 { 00:12:27.629 "name": "NewBaseBdev", 00:12:27.629 "aliases": [ 00:12:27.629 "c021475f-6dc7-423d-856b-e1d5899d5cb1" 00:12:27.629 ], 00:12:27.629 "product_name": "Malloc disk", 00:12:27.629 "block_size": 512, 00:12:27.629 "num_blocks": 65536, 00:12:27.629 "uuid": "c021475f-6dc7-423d-856b-e1d5899d5cb1", 00:12:27.629 "assigned_rate_limits": { 00:12:27.629 "rw_ios_per_sec": 0, 00:12:27.629 "rw_mbytes_per_sec": 0, 00:12:27.629 "r_mbytes_per_sec": 0, 00:12:27.629 "w_mbytes_per_sec": 0 00:12:27.629 }, 00:12:27.629 "claimed": true, 00:12:27.629 "claim_type": "exclusive_write", 00:12:27.629 "zoned": false, 00:12:27.629 "supported_io_types": { 00:12:27.629 "read": true, 00:12:27.629 "write": true, 00:12:27.629 "unmap": true, 00:12:27.629 "flush": true, 00:12:27.629 "reset": true, 00:12:27.629 "nvme_admin": false, 00:12:27.629 "nvme_io": false, 00:12:27.629 "nvme_io_md": false, 00:12:27.629 "write_zeroes": true, 00:12:27.629 "zcopy": true, 00:12:27.629 "get_zone_info": false, 00:12:27.629 "zone_management": false, 00:12:27.629 "zone_append": false, 00:12:27.629 "compare": false, 00:12:27.629 "compare_and_write": false, 00:12:27.629 "abort": true, 00:12:27.629 "seek_hole": false, 00:12:27.629 "seek_data": false, 00:12:27.629 "copy": true, 00:12:27.629 "nvme_iov_md": false 00:12:27.629 }, 00:12:27.629 "memory_domains": [ 00:12:27.629 { 00:12:27.629 "dma_device_id": "system", 00:12:27.629 "dma_device_type": 1 00:12:27.629 }, 00:12:27.629 { 00:12:27.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.629 "dma_device_type": 2 00:12:27.629 } 00:12:27.629 ], 00:12:27.629 "driver_specific": {} 00:12:27.629 } 00:12:27.629 ] 00:12:27.629 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.629 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:27.629 10:35:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:27.629 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:27.629 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.629 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.629 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.629 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:27.629 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.629 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.629 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.630 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.630 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.630 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.630 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.630 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.630 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.889 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.889 "name": "Existed_Raid", 00:12:27.889 "uuid": "b98cd624-f8df-4703-9e9e-f4f6e0033f74", 00:12:27.889 "strip_size_kb": 0, 00:12:27.889 "state": "online", 00:12:27.889 "raid_level": "raid1", 
00:12:27.889 "superblock": false, 00:12:27.889 "num_base_bdevs": 4, 00:12:27.889 "num_base_bdevs_discovered": 4, 00:12:27.889 "num_base_bdevs_operational": 4, 00:12:27.889 "base_bdevs_list": [ 00:12:27.889 { 00:12:27.889 "name": "NewBaseBdev", 00:12:27.889 "uuid": "c021475f-6dc7-423d-856b-e1d5899d5cb1", 00:12:27.889 "is_configured": true, 00:12:27.889 "data_offset": 0, 00:12:27.889 "data_size": 65536 00:12:27.889 }, 00:12:27.889 { 00:12:27.889 "name": "BaseBdev2", 00:12:27.889 "uuid": "274d7acd-20aa-4f3a-85d6-3bf96a109516", 00:12:27.889 "is_configured": true, 00:12:27.889 "data_offset": 0, 00:12:27.889 "data_size": 65536 00:12:27.889 }, 00:12:27.889 { 00:12:27.889 "name": "BaseBdev3", 00:12:27.889 "uuid": "54ce9824-0066-4a3a-88d9-0e8b15b947ba", 00:12:27.889 "is_configured": true, 00:12:27.889 "data_offset": 0, 00:12:27.889 "data_size": 65536 00:12:27.889 }, 00:12:27.889 { 00:12:27.889 "name": "BaseBdev4", 00:12:27.889 "uuid": "31b41cc5-77fd-4a52-b32f-5390b12e378f", 00:12:27.889 "is_configured": true, 00:12:27.889 "data_offset": 0, 00:12:27.889 "data_size": 65536 00:12:27.889 } 00:12:27.889 ] 00:12:27.889 }' 00:12:27.889 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.889 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.148 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:28.148 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:28.148 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:28.148 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:28.148 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:28.148 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:12:28.148 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:28.148 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.148 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.148 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:28.148 [2024-11-20 10:35:31.545659] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:28.149 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.149 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:28.149 "name": "Existed_Raid", 00:12:28.149 "aliases": [ 00:12:28.149 "b98cd624-f8df-4703-9e9e-f4f6e0033f74" 00:12:28.149 ], 00:12:28.149 "product_name": "Raid Volume", 00:12:28.149 "block_size": 512, 00:12:28.149 "num_blocks": 65536, 00:12:28.149 "uuid": "b98cd624-f8df-4703-9e9e-f4f6e0033f74", 00:12:28.149 "assigned_rate_limits": { 00:12:28.149 "rw_ios_per_sec": 0, 00:12:28.149 "rw_mbytes_per_sec": 0, 00:12:28.149 "r_mbytes_per_sec": 0, 00:12:28.149 "w_mbytes_per_sec": 0 00:12:28.149 }, 00:12:28.149 "claimed": false, 00:12:28.149 "zoned": false, 00:12:28.149 "supported_io_types": { 00:12:28.149 "read": true, 00:12:28.149 "write": true, 00:12:28.149 "unmap": false, 00:12:28.149 "flush": false, 00:12:28.149 "reset": true, 00:12:28.149 "nvme_admin": false, 00:12:28.149 "nvme_io": false, 00:12:28.149 "nvme_io_md": false, 00:12:28.149 "write_zeroes": true, 00:12:28.149 "zcopy": false, 00:12:28.149 "get_zone_info": false, 00:12:28.149 "zone_management": false, 00:12:28.149 "zone_append": false, 00:12:28.149 "compare": false, 00:12:28.149 "compare_and_write": false, 00:12:28.149 "abort": false, 00:12:28.149 "seek_hole": false, 00:12:28.149 "seek_data": false, 00:12:28.149 "copy": false, 00:12:28.149 
"nvme_iov_md": false 00:12:28.149 }, 00:12:28.149 "memory_domains": [ 00:12:28.149 { 00:12:28.149 "dma_device_id": "system", 00:12:28.149 "dma_device_type": 1 00:12:28.149 }, 00:12:28.149 { 00:12:28.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.149 "dma_device_type": 2 00:12:28.149 }, 00:12:28.149 { 00:12:28.149 "dma_device_id": "system", 00:12:28.149 "dma_device_type": 1 00:12:28.149 }, 00:12:28.149 { 00:12:28.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.149 "dma_device_type": 2 00:12:28.149 }, 00:12:28.149 { 00:12:28.149 "dma_device_id": "system", 00:12:28.149 "dma_device_type": 1 00:12:28.149 }, 00:12:28.149 { 00:12:28.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.149 "dma_device_type": 2 00:12:28.149 }, 00:12:28.149 { 00:12:28.149 "dma_device_id": "system", 00:12:28.149 "dma_device_type": 1 00:12:28.149 }, 00:12:28.149 { 00:12:28.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.149 "dma_device_type": 2 00:12:28.149 } 00:12:28.149 ], 00:12:28.149 "driver_specific": { 00:12:28.149 "raid": { 00:12:28.149 "uuid": "b98cd624-f8df-4703-9e9e-f4f6e0033f74", 00:12:28.149 "strip_size_kb": 0, 00:12:28.149 "state": "online", 00:12:28.149 "raid_level": "raid1", 00:12:28.149 "superblock": false, 00:12:28.149 "num_base_bdevs": 4, 00:12:28.149 "num_base_bdevs_discovered": 4, 00:12:28.149 "num_base_bdevs_operational": 4, 00:12:28.149 "base_bdevs_list": [ 00:12:28.149 { 00:12:28.149 "name": "NewBaseBdev", 00:12:28.149 "uuid": "c021475f-6dc7-423d-856b-e1d5899d5cb1", 00:12:28.149 "is_configured": true, 00:12:28.149 "data_offset": 0, 00:12:28.149 "data_size": 65536 00:12:28.149 }, 00:12:28.149 { 00:12:28.149 "name": "BaseBdev2", 00:12:28.149 "uuid": "274d7acd-20aa-4f3a-85d6-3bf96a109516", 00:12:28.149 "is_configured": true, 00:12:28.149 "data_offset": 0, 00:12:28.149 "data_size": 65536 00:12:28.149 }, 00:12:28.149 { 00:12:28.149 "name": "BaseBdev3", 00:12:28.149 "uuid": "54ce9824-0066-4a3a-88d9-0e8b15b947ba", 00:12:28.149 "is_configured": true, 
00:12:28.149 "data_offset": 0, 00:12:28.149 "data_size": 65536 00:12:28.149 }, 00:12:28.149 { 00:12:28.149 "name": "BaseBdev4", 00:12:28.149 "uuid": "31b41cc5-77fd-4a52-b32f-5390b12e378f", 00:12:28.149 "is_configured": true, 00:12:28.149 "data_offset": 0, 00:12:28.149 "data_size": 65536 00:12:28.149 } 00:12:28.149 ] 00:12:28.149 } 00:12:28.149 } 00:12:28.149 }' 00:12:28.149 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:28.409 BaseBdev2 00:12:28.409 BaseBdev3 00:12:28.409 BaseBdev4' 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.409 [2024-11-20 10:35:31.848724] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:28.409 [2024-11-20 10:35:31.848798] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:28.409 [2024-11-20 10:35:31.848912] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:28.409 [2024-11-20 10:35:31.849204] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:28.409 [2024-11-20 10:35:31.849218] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73367 
00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73367 ']' 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73367 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:28.409 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73367 00:12:28.668 killing process with pid 73367 00:12:28.668 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:28.668 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:28.668 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73367' 00:12:28.668 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73367 00:12:28.668 [2024-11-20 10:35:31.892886] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:28.668 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73367 00:12:28.928 [2024-11-20 10:35:32.290785] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:30.372 00:12:30.372 real 0m11.335s 00:12:30.372 user 0m18.048s 00:12:30.372 sys 0m1.925s 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:30.372 ************************************ 00:12:30.372 END TEST raid_state_function_test 00:12:30.372 ************************************ 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.372 10:35:33 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:12:30.372 10:35:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:30.372 10:35:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:30.372 10:35:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:30.372 ************************************ 00:12:30.372 START TEST raid_state_function_test_sb 00:12:30.372 ************************************ 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:30.372 10:35:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74033 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74033' 00:12:30.372 Process raid pid: 74033 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74033 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74033 ']' 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:30.372 10:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.372 [2024-11-20 10:35:33.565972] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:12:30.372 [2024-11-20 10:35:33.566086] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:30.372 [2024-11-20 10:35:33.743259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.631 [2024-11-20 10:35:33.859199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.631 [2024-11-20 10:35:34.070410] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:30.631 [2024-11-20 10:35:34.070451] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:31.201 10:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:31.201 10:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:31.201 10:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:31.201 10:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.201 10:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.201 [2024-11-20 10:35:34.405621] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:31.201 [2024-11-20 10:35:34.405673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:31.201 [2024-11-20 10:35:34.405683] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:31.201 [2024-11-20 10:35:34.405692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:31.201 [2024-11-20 10:35:34.405699] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:12:31.201 [2024-11-20 10:35:34.405707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:31.201 [2024-11-20 10:35:34.405717] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:31.201 [2024-11-20 10:35:34.405725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:31.201 10:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.201 10:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:31.201 10:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:31.201 10:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:31.201 10:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.201 10:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.201 10:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:31.201 10:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.201 10:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.201 10:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.201 10:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.201 10:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.201 10:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.201 10:35:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.201 10:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.201 10:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.201 10:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.201 "name": "Existed_Raid", 00:12:31.201 "uuid": "41e5492a-7312-4426-9eba-4b2378a1764e", 00:12:31.201 "strip_size_kb": 0, 00:12:31.201 "state": "configuring", 00:12:31.201 "raid_level": "raid1", 00:12:31.201 "superblock": true, 00:12:31.201 "num_base_bdevs": 4, 00:12:31.201 "num_base_bdevs_discovered": 0, 00:12:31.201 "num_base_bdevs_operational": 4, 00:12:31.201 "base_bdevs_list": [ 00:12:31.201 { 00:12:31.201 "name": "BaseBdev1", 00:12:31.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.201 "is_configured": false, 00:12:31.201 "data_offset": 0, 00:12:31.201 "data_size": 0 00:12:31.201 }, 00:12:31.201 { 00:12:31.201 "name": "BaseBdev2", 00:12:31.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.201 "is_configured": false, 00:12:31.201 "data_offset": 0, 00:12:31.201 "data_size": 0 00:12:31.201 }, 00:12:31.201 { 00:12:31.201 "name": "BaseBdev3", 00:12:31.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.201 "is_configured": false, 00:12:31.201 "data_offset": 0, 00:12:31.201 "data_size": 0 00:12:31.201 }, 00:12:31.201 { 00:12:31.201 "name": "BaseBdev4", 00:12:31.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.201 "is_configured": false, 00:12:31.201 "data_offset": 0, 00:12:31.201 "data_size": 0 00:12:31.201 } 00:12:31.201 ] 00:12:31.201 }' 00:12:31.201 10:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.201 10:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.462 10:35:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:31.462 10:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.462 10:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.462 [2024-11-20 10:35:34.876784] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:31.462 [2024-11-20 10:35:34.876876] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:31.462 10:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.462 10:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:31.462 10:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.462 10:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.462 [2024-11-20 10:35:34.888757] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:31.462 [2024-11-20 10:35:34.888835] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:31.462 [2024-11-20 10:35:34.888869] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:31.462 [2024-11-20 10:35:34.888892] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:31.462 [2024-11-20 10:35:34.888928] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:31.462 [2024-11-20 10:35:34.888951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:31.462 [2024-11-20 10:35:34.888982] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:12:31.462 [2024-11-20 10:35:34.889013] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:31.462 10:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.462 10:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:31.462 10:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.462 10:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.462 [2024-11-20 10:35:34.934996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:31.462 BaseBdev1 00:12:31.462 10:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.462 10:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:31.462 10:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:31.462 10:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:31.462 10:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:31.462 10:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:31.462 10:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:31.462 10:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:31.462 10:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.462 10:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.722 10:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:31.722 10:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:31.722 10:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.722 10:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.722 [ 00:12:31.722 { 00:12:31.722 "name": "BaseBdev1", 00:12:31.722 "aliases": [ 00:12:31.722 "6253be9e-d7a7-4f80-bf69-6f43671dac7c" 00:12:31.722 ], 00:12:31.722 "product_name": "Malloc disk", 00:12:31.722 "block_size": 512, 00:12:31.722 "num_blocks": 65536, 00:12:31.722 "uuid": "6253be9e-d7a7-4f80-bf69-6f43671dac7c", 00:12:31.722 "assigned_rate_limits": { 00:12:31.722 "rw_ios_per_sec": 0, 00:12:31.722 "rw_mbytes_per_sec": 0, 00:12:31.722 "r_mbytes_per_sec": 0, 00:12:31.722 "w_mbytes_per_sec": 0 00:12:31.722 }, 00:12:31.722 "claimed": true, 00:12:31.722 "claim_type": "exclusive_write", 00:12:31.722 "zoned": false, 00:12:31.722 "supported_io_types": { 00:12:31.722 "read": true, 00:12:31.722 "write": true, 00:12:31.722 "unmap": true, 00:12:31.722 "flush": true, 00:12:31.722 "reset": true, 00:12:31.722 "nvme_admin": false, 00:12:31.722 "nvme_io": false, 00:12:31.722 "nvme_io_md": false, 00:12:31.722 "write_zeroes": true, 00:12:31.722 "zcopy": true, 00:12:31.722 "get_zone_info": false, 00:12:31.722 "zone_management": false, 00:12:31.722 "zone_append": false, 00:12:31.722 "compare": false, 00:12:31.722 "compare_and_write": false, 00:12:31.722 "abort": true, 00:12:31.722 "seek_hole": false, 00:12:31.722 "seek_data": false, 00:12:31.722 "copy": true, 00:12:31.722 "nvme_iov_md": false 00:12:31.722 }, 00:12:31.722 "memory_domains": [ 00:12:31.722 { 00:12:31.722 "dma_device_id": "system", 00:12:31.722 "dma_device_type": 1 00:12:31.722 }, 00:12:31.722 { 00:12:31.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.722 "dma_device_type": 2 00:12:31.722 } 00:12:31.722 ], 00:12:31.722 "driver_specific": {} 
00:12:31.722 } 00:12:31.722 ] 00:12:31.722 10:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.722 10:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:31.722 10:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:31.722 10:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:31.722 10:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:31.722 10:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.722 10:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.722 10:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:31.722 10:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.722 10:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.722 10:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.722 10:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.722 10:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.722 10:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.722 10:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.722 10:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.722 10:35:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.722 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.722 "name": "Existed_Raid", 00:12:31.722 "uuid": "ce08f558-f826-4b37-bd04-6a6f10625529", 00:12:31.722 "strip_size_kb": 0, 00:12:31.722 "state": "configuring", 00:12:31.722 "raid_level": "raid1", 00:12:31.722 "superblock": true, 00:12:31.722 "num_base_bdevs": 4, 00:12:31.722 "num_base_bdevs_discovered": 1, 00:12:31.722 "num_base_bdevs_operational": 4, 00:12:31.722 "base_bdevs_list": [ 00:12:31.722 { 00:12:31.722 "name": "BaseBdev1", 00:12:31.722 "uuid": "6253be9e-d7a7-4f80-bf69-6f43671dac7c", 00:12:31.722 "is_configured": true, 00:12:31.722 "data_offset": 2048, 00:12:31.722 "data_size": 63488 00:12:31.722 }, 00:12:31.722 { 00:12:31.722 "name": "BaseBdev2", 00:12:31.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.722 "is_configured": false, 00:12:31.722 "data_offset": 0, 00:12:31.722 "data_size": 0 00:12:31.722 }, 00:12:31.722 { 00:12:31.722 "name": "BaseBdev3", 00:12:31.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.722 "is_configured": false, 00:12:31.722 "data_offset": 0, 00:12:31.722 "data_size": 0 00:12:31.722 }, 00:12:31.722 { 00:12:31.722 "name": "BaseBdev4", 00:12:31.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.722 "is_configured": false, 00:12:31.722 "data_offset": 0, 00:12:31.722 "data_size": 0 00:12:31.722 } 00:12:31.723 ] 00:12:31.723 }' 00:12:31.723 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.723 10:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.291 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:32.291 10:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.291 10:35:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:32.291 [2024-11-20 10:35:35.474122] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:32.292 [2024-11-20 10:35:35.474179] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:32.292 10:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.292 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:32.292 10:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.292 10:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.292 [2024-11-20 10:35:35.486164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:32.292 [2024-11-20 10:35:35.488182] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:32.292 [2024-11-20 10:35:35.488269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:32.292 [2024-11-20 10:35:35.488299] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:32.292 [2024-11-20 10:35:35.488324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:32.292 [2024-11-20 10:35:35.488343] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:32.292 [2024-11-20 10:35:35.488383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:32.292 10:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.292 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:32.292 10:35:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:32.292 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:32.292 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:32.292 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:32.292 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.292 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.292 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:32.292 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.292 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.292 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.292 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.292 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.292 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.292 10:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.292 10:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.292 10:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.292 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.292 "name": 
"Existed_Raid", 00:12:32.292 "uuid": "36556ec6-f9d0-45af-9455-48c0e2e2320c", 00:12:32.292 "strip_size_kb": 0, 00:12:32.292 "state": "configuring", 00:12:32.292 "raid_level": "raid1", 00:12:32.292 "superblock": true, 00:12:32.292 "num_base_bdevs": 4, 00:12:32.292 "num_base_bdevs_discovered": 1, 00:12:32.292 "num_base_bdevs_operational": 4, 00:12:32.292 "base_bdevs_list": [ 00:12:32.292 { 00:12:32.292 "name": "BaseBdev1", 00:12:32.292 "uuid": "6253be9e-d7a7-4f80-bf69-6f43671dac7c", 00:12:32.292 "is_configured": true, 00:12:32.292 "data_offset": 2048, 00:12:32.292 "data_size": 63488 00:12:32.292 }, 00:12:32.292 { 00:12:32.292 "name": "BaseBdev2", 00:12:32.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.292 "is_configured": false, 00:12:32.292 "data_offset": 0, 00:12:32.292 "data_size": 0 00:12:32.292 }, 00:12:32.292 { 00:12:32.292 "name": "BaseBdev3", 00:12:32.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.292 "is_configured": false, 00:12:32.292 "data_offset": 0, 00:12:32.292 "data_size": 0 00:12:32.292 }, 00:12:32.292 { 00:12:32.292 "name": "BaseBdev4", 00:12:32.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.292 "is_configured": false, 00:12:32.292 "data_offset": 0, 00:12:32.292 "data_size": 0 00:12:32.292 } 00:12:32.292 ] 00:12:32.292 }' 00:12:32.292 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.292 10:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.552 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:32.552 10:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.552 10:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.552 [2024-11-20 10:35:35.982405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:32.552 
BaseBdev2 00:12:32.552 10:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.552 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:32.552 10:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:32.552 10:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:32.552 10:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:32.552 10:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:32.552 10:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:32.552 10:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:32.552 10:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.552 10:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.552 10:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.552 10:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:32.552 10:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.552 10:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.552 [ 00:12:32.552 { 00:12:32.552 "name": "BaseBdev2", 00:12:32.552 "aliases": [ 00:12:32.552 "207438f4-1b5c-417b-8d45-af61f8015cbb" 00:12:32.552 ], 00:12:32.552 "product_name": "Malloc disk", 00:12:32.552 "block_size": 512, 00:12:32.552 "num_blocks": 65536, 00:12:32.552 "uuid": "207438f4-1b5c-417b-8d45-af61f8015cbb", 00:12:32.552 "assigned_rate_limits": { 
00:12:32.552 "rw_ios_per_sec": 0, 00:12:32.552 "rw_mbytes_per_sec": 0, 00:12:32.552 "r_mbytes_per_sec": 0, 00:12:32.552 "w_mbytes_per_sec": 0 00:12:32.552 }, 00:12:32.552 "claimed": true, 00:12:32.552 "claim_type": "exclusive_write", 00:12:32.552 "zoned": false, 00:12:32.552 "supported_io_types": { 00:12:32.552 "read": true, 00:12:32.552 "write": true, 00:12:32.552 "unmap": true, 00:12:32.552 "flush": true, 00:12:32.552 "reset": true, 00:12:32.552 "nvme_admin": false, 00:12:32.552 "nvme_io": false, 00:12:32.552 "nvme_io_md": false, 00:12:32.552 "write_zeroes": true, 00:12:32.552 "zcopy": true, 00:12:32.552 "get_zone_info": false, 00:12:32.552 "zone_management": false, 00:12:32.552 "zone_append": false, 00:12:32.552 "compare": false, 00:12:32.552 "compare_and_write": false, 00:12:32.552 "abort": true, 00:12:32.552 "seek_hole": false, 00:12:32.552 "seek_data": false, 00:12:32.552 "copy": true, 00:12:32.552 "nvme_iov_md": false 00:12:32.552 }, 00:12:32.552 "memory_domains": [ 00:12:32.552 { 00:12:32.552 "dma_device_id": "system", 00:12:32.552 "dma_device_type": 1 00:12:32.552 }, 00:12:32.552 { 00:12:32.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.552 "dma_device_type": 2 00:12:32.552 } 00:12:32.552 ], 00:12:32.553 "driver_specific": {} 00:12:32.553 } 00:12:32.553 ] 00:12:32.553 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.553 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:32.553 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:32.553 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:32.553 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:32.553 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:12:32.553 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:32.553 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.553 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.553 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:32.553 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.553 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.553 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.553 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.553 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.811 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.811 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.812 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.812 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.812 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.812 "name": "Existed_Raid", 00:12:32.812 "uuid": "36556ec6-f9d0-45af-9455-48c0e2e2320c", 00:12:32.812 "strip_size_kb": 0, 00:12:32.812 "state": "configuring", 00:12:32.812 "raid_level": "raid1", 00:12:32.812 "superblock": true, 00:12:32.812 "num_base_bdevs": 4, 00:12:32.812 "num_base_bdevs_discovered": 2, 00:12:32.812 "num_base_bdevs_operational": 4, 00:12:32.812 
"base_bdevs_list": [ 00:12:32.812 { 00:12:32.812 "name": "BaseBdev1", 00:12:32.812 "uuid": "6253be9e-d7a7-4f80-bf69-6f43671dac7c", 00:12:32.812 "is_configured": true, 00:12:32.812 "data_offset": 2048, 00:12:32.812 "data_size": 63488 00:12:32.812 }, 00:12:32.812 { 00:12:32.812 "name": "BaseBdev2", 00:12:32.812 "uuid": "207438f4-1b5c-417b-8d45-af61f8015cbb", 00:12:32.812 "is_configured": true, 00:12:32.812 "data_offset": 2048, 00:12:32.812 "data_size": 63488 00:12:32.812 }, 00:12:32.812 { 00:12:32.812 "name": "BaseBdev3", 00:12:32.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.812 "is_configured": false, 00:12:32.812 "data_offset": 0, 00:12:32.812 "data_size": 0 00:12:32.812 }, 00:12:32.812 { 00:12:32.812 "name": "BaseBdev4", 00:12:32.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.812 "is_configured": false, 00:12:32.812 "data_offset": 0, 00:12:32.812 "data_size": 0 00:12:32.812 } 00:12:32.812 ] 00:12:32.812 }' 00:12:32.812 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.812 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.072 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:33.072 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.072 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.072 [2024-11-20 10:35:36.503130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:33.072 BaseBdev3 00:12:33.072 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.072 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:33.072 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:12:33.072 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:33.072 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:33.072 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:33.072 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:33.072 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:33.072 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.072 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.072 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.072 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:33.072 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.072 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.072 [ 00:12:33.072 { 00:12:33.072 "name": "BaseBdev3", 00:12:33.072 "aliases": [ 00:12:33.072 "4cfa1179-2123-4425-9aa1-b4351b0cad1d" 00:12:33.072 ], 00:12:33.072 "product_name": "Malloc disk", 00:12:33.072 "block_size": 512, 00:12:33.072 "num_blocks": 65536, 00:12:33.072 "uuid": "4cfa1179-2123-4425-9aa1-b4351b0cad1d", 00:12:33.072 "assigned_rate_limits": { 00:12:33.072 "rw_ios_per_sec": 0, 00:12:33.072 "rw_mbytes_per_sec": 0, 00:12:33.072 "r_mbytes_per_sec": 0, 00:12:33.072 "w_mbytes_per_sec": 0 00:12:33.072 }, 00:12:33.072 "claimed": true, 00:12:33.072 "claim_type": "exclusive_write", 00:12:33.072 "zoned": false, 00:12:33.072 "supported_io_types": { 00:12:33.072 "read": true, 00:12:33.072 
"write": true, 00:12:33.072 "unmap": true, 00:12:33.072 "flush": true, 00:12:33.072 "reset": true, 00:12:33.072 "nvme_admin": false, 00:12:33.072 "nvme_io": false, 00:12:33.072 "nvme_io_md": false, 00:12:33.072 "write_zeroes": true, 00:12:33.072 "zcopy": true, 00:12:33.072 "get_zone_info": false, 00:12:33.072 "zone_management": false, 00:12:33.072 "zone_append": false, 00:12:33.072 "compare": false, 00:12:33.072 "compare_and_write": false, 00:12:33.072 "abort": true, 00:12:33.072 "seek_hole": false, 00:12:33.072 "seek_data": false, 00:12:33.072 "copy": true, 00:12:33.072 "nvme_iov_md": false 00:12:33.072 }, 00:12:33.072 "memory_domains": [ 00:12:33.072 { 00:12:33.072 "dma_device_id": "system", 00:12:33.072 "dma_device_type": 1 00:12:33.072 }, 00:12:33.072 { 00:12:33.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.072 "dma_device_type": 2 00:12:33.072 } 00:12:33.072 ], 00:12:33.072 "driver_specific": {} 00:12:33.072 } 00:12:33.072 ] 00:12:33.072 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.072 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:33.072 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:33.072 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:33.072 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:33.072 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:33.072 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:33.072 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.072 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:12:33.072 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:33.072 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.072 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.072 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.072 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.333 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.333 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.333 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.333 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.333 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.333 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.333 "name": "Existed_Raid", 00:12:33.333 "uuid": "36556ec6-f9d0-45af-9455-48c0e2e2320c", 00:12:33.333 "strip_size_kb": 0, 00:12:33.333 "state": "configuring", 00:12:33.333 "raid_level": "raid1", 00:12:33.333 "superblock": true, 00:12:33.333 "num_base_bdevs": 4, 00:12:33.333 "num_base_bdevs_discovered": 3, 00:12:33.333 "num_base_bdevs_operational": 4, 00:12:33.333 "base_bdevs_list": [ 00:12:33.333 { 00:12:33.333 "name": "BaseBdev1", 00:12:33.333 "uuid": "6253be9e-d7a7-4f80-bf69-6f43671dac7c", 00:12:33.333 "is_configured": true, 00:12:33.333 "data_offset": 2048, 00:12:33.333 "data_size": 63488 00:12:33.333 }, 00:12:33.333 { 00:12:33.333 "name": "BaseBdev2", 00:12:33.333 "uuid": 
"207438f4-1b5c-417b-8d45-af61f8015cbb", 00:12:33.333 "is_configured": true, 00:12:33.333 "data_offset": 2048, 00:12:33.333 "data_size": 63488 00:12:33.333 }, 00:12:33.333 { 00:12:33.333 "name": "BaseBdev3", 00:12:33.333 "uuid": "4cfa1179-2123-4425-9aa1-b4351b0cad1d", 00:12:33.333 "is_configured": true, 00:12:33.333 "data_offset": 2048, 00:12:33.333 "data_size": 63488 00:12:33.333 }, 00:12:33.333 { 00:12:33.333 "name": "BaseBdev4", 00:12:33.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.333 "is_configured": false, 00:12:33.333 "data_offset": 0, 00:12:33.333 "data_size": 0 00:12:33.333 } 00:12:33.333 ] 00:12:33.333 }' 00:12:33.333 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.333 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.593 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:33.593 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.593 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.593 [2024-11-20 10:35:37.020117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:33.593 [2024-11-20 10:35:37.020545] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:33.594 [2024-11-20 10:35:37.020604] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:33.594 [2024-11-20 10:35:37.020927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:33.594 BaseBdev4 00:12:33.594 [2024-11-20 10:35:37.021163] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:33.594 [2024-11-20 10:35:37.021182] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:12:33.594 [2024-11-20 10:35:37.021346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:33.594 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.594 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:33.594 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:33.594 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:33.594 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:33.594 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:33.594 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:33.594 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:33.594 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.594 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.594 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.594 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:33.594 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.594 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.594 [ 00:12:33.594 { 00:12:33.594 "name": "BaseBdev4", 00:12:33.594 "aliases": [ 00:12:33.594 "a2d7ea3d-fb88-46fd-ad99-abac9eb42daa" 00:12:33.594 ], 00:12:33.594 "product_name": "Malloc disk", 00:12:33.594 "block_size": 512, 00:12:33.594 
"num_blocks": 65536, 00:12:33.594 "uuid": "a2d7ea3d-fb88-46fd-ad99-abac9eb42daa", 00:12:33.594 "assigned_rate_limits": { 00:12:33.594 "rw_ios_per_sec": 0, 00:12:33.594 "rw_mbytes_per_sec": 0, 00:12:33.594 "r_mbytes_per_sec": 0, 00:12:33.594 "w_mbytes_per_sec": 0 00:12:33.594 }, 00:12:33.594 "claimed": true, 00:12:33.594 "claim_type": "exclusive_write", 00:12:33.594 "zoned": false, 00:12:33.594 "supported_io_types": { 00:12:33.594 "read": true, 00:12:33.594 "write": true, 00:12:33.594 "unmap": true, 00:12:33.594 "flush": true, 00:12:33.594 "reset": true, 00:12:33.594 "nvme_admin": false, 00:12:33.594 "nvme_io": false, 00:12:33.594 "nvme_io_md": false, 00:12:33.594 "write_zeroes": true, 00:12:33.594 "zcopy": true, 00:12:33.594 "get_zone_info": false, 00:12:33.594 "zone_management": false, 00:12:33.594 "zone_append": false, 00:12:33.594 "compare": false, 00:12:33.594 "compare_and_write": false, 00:12:33.594 "abort": true, 00:12:33.594 "seek_hole": false, 00:12:33.594 "seek_data": false, 00:12:33.594 "copy": true, 00:12:33.594 "nvme_iov_md": false 00:12:33.594 }, 00:12:33.594 "memory_domains": [ 00:12:33.594 { 00:12:33.594 "dma_device_id": "system", 00:12:33.594 "dma_device_type": 1 00:12:33.594 }, 00:12:33.594 { 00:12:33.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.594 "dma_device_type": 2 00:12:33.594 } 00:12:33.594 ], 00:12:33.594 "driver_specific": {} 00:12:33.594 } 00:12:33.594 ] 00:12:33.594 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.594 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:33.594 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:33.594 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:33.594 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:12:33.594 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:33.594 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.594 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.594 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.594 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:33.594 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.594 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.594 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.594 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.594 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.594 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.594 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.594 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.854 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.854 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.854 "name": "Existed_Raid", 00:12:33.854 "uuid": "36556ec6-f9d0-45af-9455-48c0e2e2320c", 00:12:33.854 "strip_size_kb": 0, 00:12:33.854 "state": "online", 00:12:33.854 "raid_level": "raid1", 00:12:33.854 "superblock": true, 00:12:33.854 "num_base_bdevs": 4, 
00:12:33.854 "num_base_bdevs_discovered": 4, 00:12:33.854 "num_base_bdevs_operational": 4, 00:12:33.854 "base_bdevs_list": [ 00:12:33.854 { 00:12:33.854 "name": "BaseBdev1", 00:12:33.854 "uuid": "6253be9e-d7a7-4f80-bf69-6f43671dac7c", 00:12:33.854 "is_configured": true, 00:12:33.854 "data_offset": 2048, 00:12:33.854 "data_size": 63488 00:12:33.854 }, 00:12:33.854 { 00:12:33.854 "name": "BaseBdev2", 00:12:33.854 "uuid": "207438f4-1b5c-417b-8d45-af61f8015cbb", 00:12:33.854 "is_configured": true, 00:12:33.854 "data_offset": 2048, 00:12:33.854 "data_size": 63488 00:12:33.854 }, 00:12:33.854 { 00:12:33.854 "name": "BaseBdev3", 00:12:33.854 "uuid": "4cfa1179-2123-4425-9aa1-b4351b0cad1d", 00:12:33.854 "is_configured": true, 00:12:33.854 "data_offset": 2048, 00:12:33.854 "data_size": 63488 00:12:33.854 }, 00:12:33.854 { 00:12:33.854 "name": "BaseBdev4", 00:12:33.854 "uuid": "a2d7ea3d-fb88-46fd-ad99-abac9eb42daa", 00:12:33.854 "is_configured": true, 00:12:33.854 "data_offset": 2048, 00:12:33.854 "data_size": 63488 00:12:33.854 } 00:12:33.854 ] 00:12:33.854 }' 00:12:33.854 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.854 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.113 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:34.113 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:34.113 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:34.113 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:34.113 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:34.113 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:34.113 
10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:34.113 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:34.113 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.113 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.113 [2024-11-20 10:35:37.515772] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:34.113 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.113 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:34.113 "name": "Existed_Raid", 00:12:34.113 "aliases": [ 00:12:34.113 "36556ec6-f9d0-45af-9455-48c0e2e2320c" 00:12:34.113 ], 00:12:34.113 "product_name": "Raid Volume", 00:12:34.113 "block_size": 512, 00:12:34.113 "num_blocks": 63488, 00:12:34.113 "uuid": "36556ec6-f9d0-45af-9455-48c0e2e2320c", 00:12:34.113 "assigned_rate_limits": { 00:12:34.113 "rw_ios_per_sec": 0, 00:12:34.113 "rw_mbytes_per_sec": 0, 00:12:34.113 "r_mbytes_per_sec": 0, 00:12:34.113 "w_mbytes_per_sec": 0 00:12:34.113 }, 00:12:34.113 "claimed": false, 00:12:34.113 "zoned": false, 00:12:34.113 "supported_io_types": { 00:12:34.113 "read": true, 00:12:34.113 "write": true, 00:12:34.113 "unmap": false, 00:12:34.113 "flush": false, 00:12:34.113 "reset": true, 00:12:34.113 "nvme_admin": false, 00:12:34.113 "nvme_io": false, 00:12:34.113 "nvme_io_md": false, 00:12:34.113 "write_zeroes": true, 00:12:34.113 "zcopy": false, 00:12:34.113 "get_zone_info": false, 00:12:34.113 "zone_management": false, 00:12:34.113 "zone_append": false, 00:12:34.113 "compare": false, 00:12:34.113 "compare_and_write": false, 00:12:34.113 "abort": false, 00:12:34.113 "seek_hole": false, 00:12:34.113 "seek_data": false, 00:12:34.113 "copy": false, 00:12:34.113 
"nvme_iov_md": false 00:12:34.113 }, 00:12:34.113 "memory_domains": [ 00:12:34.113 { 00:12:34.113 "dma_device_id": "system", 00:12:34.113 "dma_device_type": 1 00:12:34.113 }, 00:12:34.113 { 00:12:34.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.113 "dma_device_type": 2 00:12:34.113 }, 00:12:34.113 { 00:12:34.114 "dma_device_id": "system", 00:12:34.114 "dma_device_type": 1 00:12:34.114 }, 00:12:34.114 { 00:12:34.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.114 "dma_device_type": 2 00:12:34.114 }, 00:12:34.114 { 00:12:34.114 "dma_device_id": "system", 00:12:34.114 "dma_device_type": 1 00:12:34.114 }, 00:12:34.114 { 00:12:34.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.114 "dma_device_type": 2 00:12:34.114 }, 00:12:34.114 { 00:12:34.114 "dma_device_id": "system", 00:12:34.114 "dma_device_type": 1 00:12:34.114 }, 00:12:34.114 { 00:12:34.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.114 "dma_device_type": 2 00:12:34.114 } 00:12:34.114 ], 00:12:34.114 "driver_specific": { 00:12:34.114 "raid": { 00:12:34.114 "uuid": "36556ec6-f9d0-45af-9455-48c0e2e2320c", 00:12:34.114 "strip_size_kb": 0, 00:12:34.114 "state": "online", 00:12:34.114 "raid_level": "raid1", 00:12:34.114 "superblock": true, 00:12:34.114 "num_base_bdevs": 4, 00:12:34.114 "num_base_bdevs_discovered": 4, 00:12:34.114 "num_base_bdevs_operational": 4, 00:12:34.114 "base_bdevs_list": [ 00:12:34.114 { 00:12:34.114 "name": "BaseBdev1", 00:12:34.114 "uuid": "6253be9e-d7a7-4f80-bf69-6f43671dac7c", 00:12:34.114 "is_configured": true, 00:12:34.114 "data_offset": 2048, 00:12:34.114 "data_size": 63488 00:12:34.114 }, 00:12:34.114 { 00:12:34.114 "name": "BaseBdev2", 00:12:34.114 "uuid": "207438f4-1b5c-417b-8d45-af61f8015cbb", 00:12:34.114 "is_configured": true, 00:12:34.114 "data_offset": 2048, 00:12:34.114 "data_size": 63488 00:12:34.114 }, 00:12:34.114 { 00:12:34.114 "name": "BaseBdev3", 00:12:34.114 "uuid": "4cfa1179-2123-4425-9aa1-b4351b0cad1d", 00:12:34.114 "is_configured": true, 
00:12:34.114 "data_offset": 2048, 00:12:34.114 "data_size": 63488 00:12:34.114 }, 00:12:34.114 { 00:12:34.114 "name": "BaseBdev4", 00:12:34.114 "uuid": "a2d7ea3d-fb88-46fd-ad99-abac9eb42daa", 00:12:34.114 "is_configured": true, 00:12:34.114 "data_offset": 2048, 00:12:34.114 "data_size": 63488 00:12:34.114 } 00:12:34.114 ] 00:12:34.114 } 00:12:34.114 } 00:12:34.114 }' 00:12:34.114 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:34.373 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:34.373 BaseBdev2 00:12:34.373 BaseBdev3 00:12:34.373 BaseBdev4' 00:12:34.373 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.373 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:34.373 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:34.373 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:34.374 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.374 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.374 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.374 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.374 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:34.374 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:34.374 10:35:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:34.374 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:34.374 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.374 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.374 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.374 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.374 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:34.374 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:34.374 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:34.374 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:34.374 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.374 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.374 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.374 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.374 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:34.374 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:34.374 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:34.374 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:34.374 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.374 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.374 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.374 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.374 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:34.374 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:34.374 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:34.374 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.374 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.633 [2024-11-20 10:35:37.846879] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:34.633 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.633 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:34.633 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:34.633 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:34.633 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:34.633 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:34.633 10:35:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:34.633 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:34.633 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.633 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.633 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.633 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:34.633 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.633 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.633 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.633 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.633 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.633 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.633 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.633 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.633 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.633 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.633 "name": "Existed_Raid", 00:12:34.634 "uuid": "36556ec6-f9d0-45af-9455-48c0e2e2320c", 00:12:34.634 "strip_size_kb": 0, 00:12:34.634 
"state": "online", 00:12:34.634 "raid_level": "raid1", 00:12:34.634 "superblock": true, 00:12:34.634 "num_base_bdevs": 4, 00:12:34.634 "num_base_bdevs_discovered": 3, 00:12:34.634 "num_base_bdevs_operational": 3, 00:12:34.634 "base_bdevs_list": [ 00:12:34.634 { 00:12:34.634 "name": null, 00:12:34.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.634 "is_configured": false, 00:12:34.634 "data_offset": 0, 00:12:34.634 "data_size": 63488 00:12:34.634 }, 00:12:34.634 { 00:12:34.634 "name": "BaseBdev2", 00:12:34.634 "uuid": "207438f4-1b5c-417b-8d45-af61f8015cbb", 00:12:34.634 "is_configured": true, 00:12:34.634 "data_offset": 2048, 00:12:34.634 "data_size": 63488 00:12:34.634 }, 00:12:34.634 { 00:12:34.634 "name": "BaseBdev3", 00:12:34.634 "uuid": "4cfa1179-2123-4425-9aa1-b4351b0cad1d", 00:12:34.634 "is_configured": true, 00:12:34.634 "data_offset": 2048, 00:12:34.634 "data_size": 63488 00:12:34.634 }, 00:12:34.634 { 00:12:34.634 "name": "BaseBdev4", 00:12:34.634 "uuid": "a2d7ea3d-fb88-46fd-ad99-abac9eb42daa", 00:12:34.634 "is_configured": true, 00:12:34.634 "data_offset": 2048, 00:12:34.634 "data_size": 63488 00:12:34.634 } 00:12:34.634 ] 00:12:34.634 }' 00:12:34.634 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.634 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.201 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:35.201 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:35.201 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.201 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.201 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.201 10:35:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:35.201 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.201 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:35.201 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:35.201 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:35.201 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.201 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.201 [2024-11-20 10:35:38.435858] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:35.201 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.201 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:35.201 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:35.201 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.201 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:35.201 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.201 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.201 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.201 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:35.201 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:12:35.201 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:35.201 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.202 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.202 [2024-11-20 10:35:38.593079] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:35.461 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.461 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:35.461 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:35.461 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.461 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:35.461 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.461 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.461 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.461 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:35.461 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:35.461 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:35.461 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.461 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.461 [2024-11-20 10:35:38.747922] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:35.461 [2024-11-20 10:35:38.748034] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:35.461 [2024-11-20 10:35:38.854338] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:35.461 [2024-11-20 10:35:38.854464] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:35.461 [2024-11-20 10:35:38.854513] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:35.461 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.461 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:35.461 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:35.461 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.461 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:35.461 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.461 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.461 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.461 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:35.461 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:35.461 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:35.461 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:35.461 10:35:38 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:35.461 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:35.461 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.462 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.721 BaseBdev2 00:12:35.721 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.721 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:35.721 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:35.721 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:35.721 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:35.721 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:35.721 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:35.721 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:35.721 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.721 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.721 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.721 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:35.721 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.722 10:35:38 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:35.722 [ 00:12:35.722 { 00:12:35.722 "name": "BaseBdev2", 00:12:35.722 "aliases": [ 00:12:35.722 "cdeaf948-6257-4090-8aef-330c843c9776" 00:12:35.722 ], 00:12:35.722 "product_name": "Malloc disk", 00:12:35.722 "block_size": 512, 00:12:35.722 "num_blocks": 65536, 00:12:35.722 "uuid": "cdeaf948-6257-4090-8aef-330c843c9776", 00:12:35.722 "assigned_rate_limits": { 00:12:35.722 "rw_ios_per_sec": 0, 00:12:35.722 "rw_mbytes_per_sec": 0, 00:12:35.722 "r_mbytes_per_sec": 0, 00:12:35.722 "w_mbytes_per_sec": 0 00:12:35.722 }, 00:12:35.722 "claimed": false, 00:12:35.722 "zoned": false, 00:12:35.722 "supported_io_types": { 00:12:35.722 "read": true, 00:12:35.722 "write": true, 00:12:35.722 "unmap": true, 00:12:35.722 "flush": true, 00:12:35.722 "reset": true, 00:12:35.722 "nvme_admin": false, 00:12:35.722 "nvme_io": false, 00:12:35.722 "nvme_io_md": false, 00:12:35.722 "write_zeroes": true, 00:12:35.722 "zcopy": true, 00:12:35.722 "get_zone_info": false, 00:12:35.722 "zone_management": false, 00:12:35.722 "zone_append": false, 00:12:35.722 "compare": false, 00:12:35.722 "compare_and_write": false, 00:12:35.722 "abort": true, 00:12:35.722 "seek_hole": false, 00:12:35.722 "seek_data": false, 00:12:35.722 "copy": true, 00:12:35.722 "nvme_iov_md": false 00:12:35.722 }, 00:12:35.722 "memory_domains": [ 00:12:35.722 { 00:12:35.722 "dma_device_id": "system", 00:12:35.722 "dma_device_type": 1 00:12:35.722 }, 00:12:35.722 { 00:12:35.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.722 "dma_device_type": 2 00:12:35.722 } 00:12:35.722 ], 00:12:35.722 "driver_specific": {} 00:12:35.722 } 00:12:35.722 ] 00:12:35.722 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.722 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:35.722 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:35.722 10:35:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:35.722 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:35.722 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.722 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.722 BaseBdev3 00:12:35.722 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.722 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:35.722 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:35.722 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:35.722 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:35.722 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:35.722 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:35.722 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:35.722 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.722 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.722 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.722 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:35.722 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.722 10:35:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.722 [ 00:12:35.722 { 00:12:35.722 "name": "BaseBdev3", 00:12:35.722 "aliases": [ 00:12:35.722 "84917326-e06a-42a9-900d-77067c8ef11a" 00:12:35.722 ], 00:12:35.722 "product_name": "Malloc disk", 00:12:35.722 "block_size": 512, 00:12:35.722 "num_blocks": 65536, 00:12:35.722 "uuid": "84917326-e06a-42a9-900d-77067c8ef11a", 00:12:35.722 "assigned_rate_limits": { 00:12:35.722 "rw_ios_per_sec": 0, 00:12:35.722 "rw_mbytes_per_sec": 0, 00:12:35.722 "r_mbytes_per_sec": 0, 00:12:35.722 "w_mbytes_per_sec": 0 00:12:35.722 }, 00:12:35.722 "claimed": false, 00:12:35.722 "zoned": false, 00:12:35.722 "supported_io_types": { 00:12:35.722 "read": true, 00:12:35.722 "write": true, 00:12:35.722 "unmap": true, 00:12:35.722 "flush": true, 00:12:35.722 "reset": true, 00:12:35.722 "nvme_admin": false, 00:12:35.722 "nvme_io": false, 00:12:35.722 "nvme_io_md": false, 00:12:35.722 "write_zeroes": true, 00:12:35.722 "zcopy": true, 00:12:35.722 "get_zone_info": false, 00:12:35.722 "zone_management": false, 00:12:35.722 "zone_append": false, 00:12:35.722 "compare": false, 00:12:35.722 "compare_and_write": false, 00:12:35.722 "abort": true, 00:12:35.722 "seek_hole": false, 00:12:35.722 "seek_data": false, 00:12:35.722 "copy": true, 00:12:35.722 "nvme_iov_md": false 00:12:35.722 }, 00:12:35.722 "memory_domains": [ 00:12:35.722 { 00:12:35.722 "dma_device_id": "system", 00:12:35.722 "dma_device_type": 1 00:12:35.722 }, 00:12:35.722 { 00:12:35.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.722 "dma_device_type": 2 00:12:35.722 } 00:12:35.722 ], 00:12:35.722 "driver_specific": {} 00:12:35.722 } 00:12:35.722 ] 00:12:35.722 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.722 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:35.722 10:35:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:35.722 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:35.722 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:35.722 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.722 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.722 BaseBdev4 00:12:35.722 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.722 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:35.722 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:35.722 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:35.722 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:35.722 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:35.722 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:35.722 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:35.722 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.722 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.722 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.722 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:35.722 10:35:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.722 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.722 [ 00:12:35.722 { 00:12:35.722 "name": "BaseBdev4", 00:12:35.722 "aliases": [ 00:12:35.722 "eb2e24b8-11f6-4768-9291-2d222a0220ba" 00:12:35.722 ], 00:12:35.722 "product_name": "Malloc disk", 00:12:35.722 "block_size": 512, 00:12:35.722 "num_blocks": 65536, 00:12:35.722 "uuid": "eb2e24b8-11f6-4768-9291-2d222a0220ba", 00:12:35.722 "assigned_rate_limits": { 00:12:35.722 "rw_ios_per_sec": 0, 00:12:35.722 "rw_mbytes_per_sec": 0, 00:12:35.722 "r_mbytes_per_sec": 0, 00:12:35.722 "w_mbytes_per_sec": 0 00:12:35.722 }, 00:12:35.722 "claimed": false, 00:12:35.722 "zoned": false, 00:12:35.722 "supported_io_types": { 00:12:35.722 "read": true, 00:12:35.722 "write": true, 00:12:35.722 "unmap": true, 00:12:35.722 "flush": true, 00:12:35.722 "reset": true, 00:12:35.722 "nvme_admin": false, 00:12:35.722 "nvme_io": false, 00:12:35.722 "nvme_io_md": false, 00:12:35.722 "write_zeroes": true, 00:12:35.722 "zcopy": true, 00:12:35.722 "get_zone_info": false, 00:12:35.722 "zone_management": false, 00:12:35.722 "zone_append": false, 00:12:35.722 "compare": false, 00:12:35.722 "compare_and_write": false, 00:12:35.722 "abort": true, 00:12:35.722 "seek_hole": false, 00:12:35.722 "seek_data": false, 00:12:35.722 "copy": true, 00:12:35.722 "nvme_iov_md": false 00:12:35.722 }, 00:12:35.722 "memory_domains": [ 00:12:35.722 { 00:12:35.722 "dma_device_id": "system", 00:12:35.722 "dma_device_type": 1 00:12:35.722 }, 00:12:35.722 { 00:12:35.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.722 "dma_device_type": 2 00:12:35.722 } 00:12:35.722 ], 00:12:35.722 "driver_specific": {} 00:12:35.722 } 00:12:35.722 ] 00:12:35.722 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.722 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:12:35.722 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:35.722 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:35.723 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:35.723 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.723 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.723 [2024-11-20 10:35:39.158639] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:35.723 [2024-11-20 10:35:39.158744] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:35.723 [2024-11-20 10:35:39.158805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:35.723 [2024-11-20 10:35:39.160688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:35.723 [2024-11-20 10:35:39.160783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:35.723 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.723 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:35.723 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:35.723 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:35.723 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.723 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:35.723 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:35.723 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.723 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.723 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.723 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.723 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.723 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.723 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.723 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.723 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.982 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.982 "name": "Existed_Raid", 00:12:35.982 "uuid": "6a894e3f-2858-498c-85e9-f07561b4cbcf", 00:12:35.982 "strip_size_kb": 0, 00:12:35.982 "state": "configuring", 00:12:35.982 "raid_level": "raid1", 00:12:35.982 "superblock": true, 00:12:35.982 "num_base_bdevs": 4, 00:12:35.982 "num_base_bdevs_discovered": 3, 00:12:35.982 "num_base_bdevs_operational": 4, 00:12:35.982 "base_bdevs_list": [ 00:12:35.982 { 00:12:35.982 "name": "BaseBdev1", 00:12:35.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.982 "is_configured": false, 00:12:35.982 "data_offset": 0, 00:12:35.982 "data_size": 0 00:12:35.982 }, 00:12:35.982 { 00:12:35.982 "name": "BaseBdev2", 00:12:35.982 "uuid": "cdeaf948-6257-4090-8aef-330c843c9776", 
00:12:35.982 "is_configured": true, 00:12:35.982 "data_offset": 2048, 00:12:35.982 "data_size": 63488 00:12:35.982 }, 00:12:35.982 { 00:12:35.982 "name": "BaseBdev3", 00:12:35.982 "uuid": "84917326-e06a-42a9-900d-77067c8ef11a", 00:12:35.982 "is_configured": true, 00:12:35.982 "data_offset": 2048, 00:12:35.982 "data_size": 63488 00:12:35.982 }, 00:12:35.982 { 00:12:35.982 "name": "BaseBdev4", 00:12:35.982 "uuid": "eb2e24b8-11f6-4768-9291-2d222a0220ba", 00:12:35.982 "is_configured": true, 00:12:35.982 "data_offset": 2048, 00:12:35.982 "data_size": 63488 00:12:35.982 } 00:12:35.982 ] 00:12:35.982 }' 00:12:35.982 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.982 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.241 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:36.241 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.241 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.241 [2024-11-20 10:35:39.637867] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:36.241 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.241 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:36.241 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:36.241 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:36.241 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.241 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:36.241 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:36.241 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.241 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.241 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.241 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.241 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.241 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.241 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.241 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.241 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.241 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.241 "name": "Existed_Raid", 00:12:36.241 "uuid": "6a894e3f-2858-498c-85e9-f07561b4cbcf", 00:12:36.241 "strip_size_kb": 0, 00:12:36.241 "state": "configuring", 00:12:36.241 "raid_level": "raid1", 00:12:36.241 "superblock": true, 00:12:36.241 "num_base_bdevs": 4, 00:12:36.241 "num_base_bdevs_discovered": 2, 00:12:36.241 "num_base_bdevs_operational": 4, 00:12:36.241 "base_bdevs_list": [ 00:12:36.241 { 00:12:36.241 "name": "BaseBdev1", 00:12:36.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.241 "is_configured": false, 00:12:36.241 "data_offset": 0, 00:12:36.241 "data_size": 0 00:12:36.241 }, 00:12:36.241 { 00:12:36.241 "name": null, 00:12:36.241 "uuid": "cdeaf948-6257-4090-8aef-330c843c9776", 00:12:36.241 
"is_configured": false, 00:12:36.241 "data_offset": 0, 00:12:36.241 "data_size": 63488 00:12:36.241 }, 00:12:36.241 { 00:12:36.241 "name": "BaseBdev3", 00:12:36.241 "uuid": "84917326-e06a-42a9-900d-77067c8ef11a", 00:12:36.241 "is_configured": true, 00:12:36.241 "data_offset": 2048, 00:12:36.241 "data_size": 63488 00:12:36.241 }, 00:12:36.241 { 00:12:36.241 "name": "BaseBdev4", 00:12:36.241 "uuid": "eb2e24b8-11f6-4768-9291-2d222a0220ba", 00:12:36.241 "is_configured": true, 00:12:36.241 "data_offset": 2048, 00:12:36.241 "data_size": 63488 00:12:36.241 } 00:12:36.241 ] 00:12:36.241 }' 00:12:36.241 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.241 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.809 10:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.809 10:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:36.809 10:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.809 10:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.809 10:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.809 10:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:36.809 10:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:36.809 10:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.809 10:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.809 [2024-11-20 10:35:40.138406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:36.809 BaseBdev1 
00:12:36.809 10:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.809 10:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:36.810 10:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:36.810 10:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:36.810 10:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:36.810 10:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:36.810 10:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:36.810 10:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:36.810 10:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.810 10:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.810 10:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.810 10:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:36.810 10:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.810 10:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.810 [ 00:12:36.810 { 00:12:36.810 "name": "BaseBdev1", 00:12:36.810 "aliases": [ 00:12:36.810 "f2b7fd2f-19d4-4dd5-84e4-f221a0fef49e" 00:12:36.810 ], 00:12:36.810 "product_name": "Malloc disk", 00:12:36.810 "block_size": 512, 00:12:36.810 "num_blocks": 65536, 00:12:36.810 "uuid": "f2b7fd2f-19d4-4dd5-84e4-f221a0fef49e", 00:12:36.810 "assigned_rate_limits": { 00:12:36.810 
"rw_ios_per_sec": 0, 00:12:36.810 "rw_mbytes_per_sec": 0, 00:12:36.810 "r_mbytes_per_sec": 0, 00:12:36.810 "w_mbytes_per_sec": 0 00:12:36.810 }, 00:12:36.810 "claimed": true, 00:12:36.810 "claim_type": "exclusive_write", 00:12:36.810 "zoned": false, 00:12:36.810 "supported_io_types": { 00:12:36.810 "read": true, 00:12:36.810 "write": true, 00:12:36.810 "unmap": true, 00:12:36.810 "flush": true, 00:12:36.810 "reset": true, 00:12:36.810 "nvme_admin": false, 00:12:36.810 "nvme_io": false, 00:12:36.810 "nvme_io_md": false, 00:12:36.810 "write_zeroes": true, 00:12:36.810 "zcopy": true, 00:12:36.810 "get_zone_info": false, 00:12:36.810 "zone_management": false, 00:12:36.810 "zone_append": false, 00:12:36.810 "compare": false, 00:12:36.810 "compare_and_write": false, 00:12:36.810 "abort": true, 00:12:36.810 "seek_hole": false, 00:12:36.810 "seek_data": false, 00:12:36.810 "copy": true, 00:12:36.810 "nvme_iov_md": false 00:12:36.810 }, 00:12:36.810 "memory_domains": [ 00:12:36.810 { 00:12:36.810 "dma_device_id": "system", 00:12:36.810 "dma_device_type": 1 00:12:36.810 }, 00:12:36.810 { 00:12:36.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.810 "dma_device_type": 2 00:12:36.810 } 00:12:36.810 ], 00:12:36.810 "driver_specific": {} 00:12:36.810 } 00:12:36.810 ] 00:12:36.810 10:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.810 10:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:36.810 10:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:36.810 10:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:36.810 10:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:36.810 10:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:36.810 10:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.810 10:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:36.810 10:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.810 10:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.810 10:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.810 10:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.810 10:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.810 10:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.810 10:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.810 10:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.810 10:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.810 10:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.810 "name": "Existed_Raid", 00:12:36.810 "uuid": "6a894e3f-2858-498c-85e9-f07561b4cbcf", 00:12:36.810 "strip_size_kb": 0, 00:12:36.810 "state": "configuring", 00:12:36.810 "raid_level": "raid1", 00:12:36.810 "superblock": true, 00:12:36.810 "num_base_bdevs": 4, 00:12:36.810 "num_base_bdevs_discovered": 3, 00:12:36.810 "num_base_bdevs_operational": 4, 00:12:36.810 "base_bdevs_list": [ 00:12:36.810 { 00:12:36.810 "name": "BaseBdev1", 00:12:36.810 "uuid": "f2b7fd2f-19d4-4dd5-84e4-f221a0fef49e", 00:12:36.810 "is_configured": true, 00:12:36.810 "data_offset": 2048, 00:12:36.810 "data_size": 63488 
00:12:36.810 }, 00:12:36.810 { 00:12:36.810 "name": null, 00:12:36.810 "uuid": "cdeaf948-6257-4090-8aef-330c843c9776", 00:12:36.810 "is_configured": false, 00:12:36.810 "data_offset": 0, 00:12:36.810 "data_size": 63488 00:12:36.810 }, 00:12:36.810 { 00:12:36.810 "name": "BaseBdev3", 00:12:36.810 "uuid": "84917326-e06a-42a9-900d-77067c8ef11a", 00:12:36.810 "is_configured": true, 00:12:36.810 "data_offset": 2048, 00:12:36.810 "data_size": 63488 00:12:36.810 }, 00:12:36.810 { 00:12:36.810 "name": "BaseBdev4", 00:12:36.810 "uuid": "eb2e24b8-11f6-4768-9291-2d222a0220ba", 00:12:36.810 "is_configured": true, 00:12:36.810 "data_offset": 2048, 00:12:36.810 "data_size": 63488 00:12:36.810 } 00:12:36.810 ] 00:12:36.810 }' 00:12:36.810 10:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.810 10:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.385 10:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.385 10:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:37.385 10:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.385 10:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.385 10:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.385 10:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:37.385 10:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:37.385 10:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.385 10:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.385 
[2024-11-20 10:35:40.677581] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:37.385 10:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.385 10:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:37.385 10:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:37.385 10:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:37.385 10:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.385 10:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.385 10:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:37.385 10:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.385 10:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.385 10:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.385 10:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.385 10:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.385 10:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.385 10:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.385 10:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.385 10:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.385 10:35:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.385 "name": "Existed_Raid", 00:12:37.385 "uuid": "6a894e3f-2858-498c-85e9-f07561b4cbcf", 00:12:37.385 "strip_size_kb": 0, 00:12:37.385 "state": "configuring", 00:12:37.385 "raid_level": "raid1", 00:12:37.385 "superblock": true, 00:12:37.385 "num_base_bdevs": 4, 00:12:37.385 "num_base_bdevs_discovered": 2, 00:12:37.385 "num_base_bdevs_operational": 4, 00:12:37.385 "base_bdevs_list": [ 00:12:37.385 { 00:12:37.385 "name": "BaseBdev1", 00:12:37.385 "uuid": "f2b7fd2f-19d4-4dd5-84e4-f221a0fef49e", 00:12:37.385 "is_configured": true, 00:12:37.385 "data_offset": 2048, 00:12:37.385 "data_size": 63488 00:12:37.385 }, 00:12:37.385 { 00:12:37.385 "name": null, 00:12:37.385 "uuid": "cdeaf948-6257-4090-8aef-330c843c9776", 00:12:37.385 "is_configured": false, 00:12:37.385 "data_offset": 0, 00:12:37.385 "data_size": 63488 00:12:37.385 }, 00:12:37.385 { 00:12:37.385 "name": null, 00:12:37.385 "uuid": "84917326-e06a-42a9-900d-77067c8ef11a", 00:12:37.385 "is_configured": false, 00:12:37.385 "data_offset": 0, 00:12:37.385 "data_size": 63488 00:12:37.385 }, 00:12:37.385 { 00:12:37.385 "name": "BaseBdev4", 00:12:37.385 "uuid": "eb2e24b8-11f6-4768-9291-2d222a0220ba", 00:12:37.385 "is_configured": true, 00:12:37.385 "data_offset": 2048, 00:12:37.385 "data_size": 63488 00:12:37.385 } 00:12:37.385 ] 00:12:37.385 }' 00:12:37.385 10:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.385 10:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.645 10:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:37.645 10:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.645 10:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.645 
10:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.645 10:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.905 10:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:37.905 10:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:37.905 10:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.905 10:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.905 [2024-11-20 10:35:41.132783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:37.905 10:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.905 10:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:37.905 10:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:37.905 10:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:37.905 10:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.905 10:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.905 10:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:37.905 10:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.905 10:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.905 10:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:37.905 10:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.905 10:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.905 10:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.905 10:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.905 10:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.905 10:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.905 10:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.905 "name": "Existed_Raid", 00:12:37.905 "uuid": "6a894e3f-2858-498c-85e9-f07561b4cbcf", 00:12:37.905 "strip_size_kb": 0, 00:12:37.905 "state": "configuring", 00:12:37.905 "raid_level": "raid1", 00:12:37.905 "superblock": true, 00:12:37.905 "num_base_bdevs": 4, 00:12:37.905 "num_base_bdevs_discovered": 3, 00:12:37.905 "num_base_bdevs_operational": 4, 00:12:37.905 "base_bdevs_list": [ 00:12:37.905 { 00:12:37.905 "name": "BaseBdev1", 00:12:37.905 "uuid": "f2b7fd2f-19d4-4dd5-84e4-f221a0fef49e", 00:12:37.905 "is_configured": true, 00:12:37.905 "data_offset": 2048, 00:12:37.905 "data_size": 63488 00:12:37.905 }, 00:12:37.905 { 00:12:37.905 "name": null, 00:12:37.905 "uuid": "cdeaf948-6257-4090-8aef-330c843c9776", 00:12:37.905 "is_configured": false, 00:12:37.905 "data_offset": 0, 00:12:37.905 "data_size": 63488 00:12:37.905 }, 00:12:37.905 { 00:12:37.905 "name": "BaseBdev3", 00:12:37.905 "uuid": "84917326-e06a-42a9-900d-77067c8ef11a", 00:12:37.905 "is_configured": true, 00:12:37.905 "data_offset": 2048, 00:12:37.905 "data_size": 63488 00:12:37.905 }, 00:12:37.905 { 00:12:37.905 "name": "BaseBdev4", 00:12:37.905 "uuid": 
"eb2e24b8-11f6-4768-9291-2d222a0220ba", 00:12:37.905 "is_configured": true, 00:12:37.905 "data_offset": 2048, 00:12:37.905 "data_size": 63488 00:12:37.905 } 00:12:37.905 ] 00:12:37.905 }' 00:12:37.905 10:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.905 10:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.165 10:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:38.165 10:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.165 10:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.166 10:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.166 10:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.425 10:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:38.425 10:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:38.425 10:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.425 10:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.425 [2024-11-20 10:35:41.671946] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:38.425 10:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.426 10:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:38.426 10:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:38.426 10:35:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:38.426 10:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.426 10:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.426 10:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:38.426 10:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.426 10:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.426 10:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.426 10:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.426 10:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.426 10:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.426 10:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.426 10:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.426 10:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.426 10:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.426 "name": "Existed_Raid", 00:12:38.426 "uuid": "6a894e3f-2858-498c-85e9-f07561b4cbcf", 00:12:38.426 "strip_size_kb": 0, 00:12:38.426 "state": "configuring", 00:12:38.426 "raid_level": "raid1", 00:12:38.426 "superblock": true, 00:12:38.426 "num_base_bdevs": 4, 00:12:38.426 "num_base_bdevs_discovered": 2, 00:12:38.426 "num_base_bdevs_operational": 4, 00:12:38.426 "base_bdevs_list": [ 00:12:38.426 { 00:12:38.426 "name": null, 00:12:38.426 
"uuid": "f2b7fd2f-19d4-4dd5-84e4-f221a0fef49e", 00:12:38.426 "is_configured": false, 00:12:38.426 "data_offset": 0, 00:12:38.426 "data_size": 63488 00:12:38.426 }, 00:12:38.426 { 00:12:38.426 "name": null, 00:12:38.426 "uuid": "cdeaf948-6257-4090-8aef-330c843c9776", 00:12:38.426 "is_configured": false, 00:12:38.426 "data_offset": 0, 00:12:38.426 "data_size": 63488 00:12:38.426 }, 00:12:38.426 { 00:12:38.426 "name": "BaseBdev3", 00:12:38.426 "uuid": "84917326-e06a-42a9-900d-77067c8ef11a", 00:12:38.426 "is_configured": true, 00:12:38.426 "data_offset": 2048, 00:12:38.426 "data_size": 63488 00:12:38.426 }, 00:12:38.426 { 00:12:38.426 "name": "BaseBdev4", 00:12:38.426 "uuid": "eb2e24b8-11f6-4768-9291-2d222a0220ba", 00:12:38.426 "is_configured": true, 00:12:38.426 "data_offset": 2048, 00:12:38.426 "data_size": 63488 00:12:38.426 } 00:12:38.426 ] 00:12:38.426 }' 00:12:38.426 10:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.426 10:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.996 10:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:38.996 10:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.996 10:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.996 10:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.996 10:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.996 10:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:38.996 10:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:38.996 10:35:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.996 10:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.996 [2024-11-20 10:35:42.320958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:38.996 10:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.996 10:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:38.996 10:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:38.996 10:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:38.996 10:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.996 10:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.996 10:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:38.996 10:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.996 10:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.996 10:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.996 10:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.996 10:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.996 10:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.996 10:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.996 10:35:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.996 10:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.996 10:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.996 "name": "Existed_Raid", 00:12:38.996 "uuid": "6a894e3f-2858-498c-85e9-f07561b4cbcf", 00:12:38.996 "strip_size_kb": 0, 00:12:38.996 "state": "configuring", 00:12:38.996 "raid_level": "raid1", 00:12:38.996 "superblock": true, 00:12:38.996 "num_base_bdevs": 4, 00:12:38.996 "num_base_bdevs_discovered": 3, 00:12:38.996 "num_base_bdevs_operational": 4, 00:12:38.996 "base_bdevs_list": [ 00:12:38.996 { 00:12:38.996 "name": null, 00:12:38.996 "uuid": "f2b7fd2f-19d4-4dd5-84e4-f221a0fef49e", 00:12:38.996 "is_configured": false, 00:12:38.996 "data_offset": 0, 00:12:38.996 "data_size": 63488 00:12:38.996 }, 00:12:38.996 { 00:12:38.996 "name": "BaseBdev2", 00:12:38.996 "uuid": "cdeaf948-6257-4090-8aef-330c843c9776", 00:12:38.996 "is_configured": true, 00:12:38.996 "data_offset": 2048, 00:12:38.996 "data_size": 63488 00:12:38.996 }, 00:12:38.996 { 00:12:38.996 "name": "BaseBdev3", 00:12:38.996 "uuid": "84917326-e06a-42a9-900d-77067c8ef11a", 00:12:38.996 "is_configured": true, 00:12:38.996 "data_offset": 2048, 00:12:38.996 "data_size": 63488 00:12:38.996 }, 00:12:38.996 { 00:12:38.996 "name": "BaseBdev4", 00:12:38.996 "uuid": "eb2e24b8-11f6-4768-9291-2d222a0220ba", 00:12:38.996 "is_configured": true, 00:12:38.996 "data_offset": 2048, 00:12:38.996 "data_size": 63488 00:12:38.996 } 00:12:38.996 ] 00:12:38.996 }' 00:12:38.996 10:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.996 10:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.566 10:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.566 10:35:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:39.566 10:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.566 10:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.566 10:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.566 10:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:39.566 10:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.566 10:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:39.566 10:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.566 10:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.566 10:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.566 10:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f2b7fd2f-19d4-4dd5-84e4-f221a0fef49e 00:12:39.566 10:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.566 10:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.566 [2024-11-20 10:35:42.902665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:39.566 [2024-11-20 10:35:42.903005] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:39.566 [2024-11-20 10:35:42.903063] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:39.566 [2024-11-20 10:35:42.903392] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:39.566 [2024-11-20 10:35:42.903613] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:39.566 [2024-11-20 10:35:42.903668] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:39.566 NewBaseBdev 00:12:39.566 [2024-11-20 10:35:42.903858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:39.566 10:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.566 10:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:39.566 10:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:39.566 10:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:39.566 10:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:39.566 10:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:39.566 10:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:39.566 10:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:39.566 10:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.566 10:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.566 10:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.566 10:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:39.566 10:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.566 10:35:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.566 [ 00:12:39.566 { 00:12:39.566 "name": "NewBaseBdev", 00:12:39.566 "aliases": [ 00:12:39.566 "f2b7fd2f-19d4-4dd5-84e4-f221a0fef49e" 00:12:39.566 ], 00:12:39.566 "product_name": "Malloc disk", 00:12:39.566 "block_size": 512, 00:12:39.566 "num_blocks": 65536, 00:12:39.566 "uuid": "f2b7fd2f-19d4-4dd5-84e4-f221a0fef49e", 00:12:39.566 "assigned_rate_limits": { 00:12:39.566 "rw_ios_per_sec": 0, 00:12:39.566 "rw_mbytes_per_sec": 0, 00:12:39.566 "r_mbytes_per_sec": 0, 00:12:39.566 "w_mbytes_per_sec": 0 00:12:39.566 }, 00:12:39.566 "claimed": true, 00:12:39.566 "claim_type": "exclusive_write", 00:12:39.566 "zoned": false, 00:12:39.566 "supported_io_types": { 00:12:39.566 "read": true, 00:12:39.566 "write": true, 00:12:39.567 "unmap": true, 00:12:39.567 "flush": true, 00:12:39.567 "reset": true, 00:12:39.567 "nvme_admin": false, 00:12:39.567 "nvme_io": false, 00:12:39.567 "nvme_io_md": false, 00:12:39.567 "write_zeroes": true, 00:12:39.567 "zcopy": true, 00:12:39.567 "get_zone_info": false, 00:12:39.567 "zone_management": false, 00:12:39.567 "zone_append": false, 00:12:39.567 "compare": false, 00:12:39.567 "compare_and_write": false, 00:12:39.567 "abort": true, 00:12:39.567 "seek_hole": false, 00:12:39.567 "seek_data": false, 00:12:39.567 "copy": true, 00:12:39.567 "nvme_iov_md": false 00:12:39.567 }, 00:12:39.567 "memory_domains": [ 00:12:39.567 { 00:12:39.567 "dma_device_id": "system", 00:12:39.567 "dma_device_type": 1 00:12:39.567 }, 00:12:39.567 { 00:12:39.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.567 "dma_device_type": 2 00:12:39.567 } 00:12:39.567 ], 00:12:39.567 "driver_specific": {} 00:12:39.567 } 00:12:39.567 ] 00:12:39.567 10:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.567 10:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:39.567 10:35:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:39.567 10:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:39.567 10:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:39.567 10:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:39.567 10:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:39.567 10:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:39.567 10:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.567 10:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.567 10:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.567 10:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.567 10:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.567 10:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.567 10:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.567 10:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.567 10:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.567 10:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.567 "name": "Existed_Raid", 00:12:39.567 "uuid": "6a894e3f-2858-498c-85e9-f07561b4cbcf", 00:12:39.567 "strip_size_kb": 0, 00:12:39.567 
"state": "online", 00:12:39.567 "raid_level": "raid1", 00:12:39.567 "superblock": true, 00:12:39.567 "num_base_bdevs": 4, 00:12:39.567 "num_base_bdevs_discovered": 4, 00:12:39.567 "num_base_bdevs_operational": 4, 00:12:39.567 "base_bdevs_list": [ 00:12:39.567 { 00:12:39.567 "name": "NewBaseBdev", 00:12:39.567 "uuid": "f2b7fd2f-19d4-4dd5-84e4-f221a0fef49e", 00:12:39.567 "is_configured": true, 00:12:39.567 "data_offset": 2048, 00:12:39.567 "data_size": 63488 00:12:39.567 }, 00:12:39.567 { 00:12:39.567 "name": "BaseBdev2", 00:12:39.567 "uuid": "cdeaf948-6257-4090-8aef-330c843c9776", 00:12:39.567 "is_configured": true, 00:12:39.567 "data_offset": 2048, 00:12:39.567 "data_size": 63488 00:12:39.567 }, 00:12:39.567 { 00:12:39.567 "name": "BaseBdev3", 00:12:39.567 "uuid": "84917326-e06a-42a9-900d-77067c8ef11a", 00:12:39.567 "is_configured": true, 00:12:39.567 "data_offset": 2048, 00:12:39.567 "data_size": 63488 00:12:39.567 }, 00:12:39.567 { 00:12:39.567 "name": "BaseBdev4", 00:12:39.567 "uuid": "eb2e24b8-11f6-4768-9291-2d222a0220ba", 00:12:39.567 "is_configured": true, 00:12:39.567 "data_offset": 2048, 00:12:39.567 "data_size": 63488 00:12:39.567 } 00:12:39.567 ] 00:12:39.567 }' 00:12:39.567 10:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.567 10:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.137 10:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:40.137 10:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:40.137 10:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:40.137 10:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:40.137 10:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:40.137 
10:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:40.137 10:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:40.137 10:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:40.137 10:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.137 10:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.137 [2024-11-20 10:35:43.402505] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:40.137 10:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.137 10:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:40.137 "name": "Existed_Raid", 00:12:40.137 "aliases": [ 00:12:40.137 "6a894e3f-2858-498c-85e9-f07561b4cbcf" 00:12:40.137 ], 00:12:40.137 "product_name": "Raid Volume", 00:12:40.137 "block_size": 512, 00:12:40.137 "num_blocks": 63488, 00:12:40.137 "uuid": "6a894e3f-2858-498c-85e9-f07561b4cbcf", 00:12:40.137 "assigned_rate_limits": { 00:12:40.137 "rw_ios_per_sec": 0, 00:12:40.137 "rw_mbytes_per_sec": 0, 00:12:40.137 "r_mbytes_per_sec": 0, 00:12:40.137 "w_mbytes_per_sec": 0 00:12:40.137 }, 00:12:40.137 "claimed": false, 00:12:40.137 "zoned": false, 00:12:40.137 "supported_io_types": { 00:12:40.137 "read": true, 00:12:40.137 "write": true, 00:12:40.137 "unmap": false, 00:12:40.137 "flush": false, 00:12:40.137 "reset": true, 00:12:40.137 "nvme_admin": false, 00:12:40.137 "nvme_io": false, 00:12:40.137 "nvme_io_md": false, 00:12:40.137 "write_zeroes": true, 00:12:40.137 "zcopy": false, 00:12:40.137 "get_zone_info": false, 00:12:40.137 "zone_management": false, 00:12:40.137 "zone_append": false, 00:12:40.137 "compare": false, 00:12:40.137 "compare_and_write": false, 00:12:40.137 
"abort": false, 00:12:40.137 "seek_hole": false, 00:12:40.137 "seek_data": false, 00:12:40.137 "copy": false, 00:12:40.137 "nvme_iov_md": false 00:12:40.137 }, 00:12:40.137 "memory_domains": [ 00:12:40.137 { 00:12:40.137 "dma_device_id": "system", 00:12:40.137 "dma_device_type": 1 00:12:40.137 }, 00:12:40.137 { 00:12:40.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.137 "dma_device_type": 2 00:12:40.137 }, 00:12:40.137 { 00:12:40.137 "dma_device_id": "system", 00:12:40.137 "dma_device_type": 1 00:12:40.137 }, 00:12:40.137 { 00:12:40.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.137 "dma_device_type": 2 00:12:40.137 }, 00:12:40.137 { 00:12:40.137 "dma_device_id": "system", 00:12:40.137 "dma_device_type": 1 00:12:40.137 }, 00:12:40.137 { 00:12:40.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.137 "dma_device_type": 2 00:12:40.137 }, 00:12:40.137 { 00:12:40.137 "dma_device_id": "system", 00:12:40.137 "dma_device_type": 1 00:12:40.137 }, 00:12:40.137 { 00:12:40.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.137 "dma_device_type": 2 00:12:40.137 } 00:12:40.137 ], 00:12:40.137 "driver_specific": { 00:12:40.137 "raid": { 00:12:40.137 "uuid": "6a894e3f-2858-498c-85e9-f07561b4cbcf", 00:12:40.137 "strip_size_kb": 0, 00:12:40.137 "state": "online", 00:12:40.137 "raid_level": "raid1", 00:12:40.137 "superblock": true, 00:12:40.137 "num_base_bdevs": 4, 00:12:40.137 "num_base_bdevs_discovered": 4, 00:12:40.137 "num_base_bdevs_operational": 4, 00:12:40.137 "base_bdevs_list": [ 00:12:40.137 { 00:12:40.137 "name": "NewBaseBdev", 00:12:40.137 "uuid": "f2b7fd2f-19d4-4dd5-84e4-f221a0fef49e", 00:12:40.137 "is_configured": true, 00:12:40.137 "data_offset": 2048, 00:12:40.137 "data_size": 63488 00:12:40.137 }, 00:12:40.137 { 00:12:40.137 "name": "BaseBdev2", 00:12:40.137 "uuid": "cdeaf948-6257-4090-8aef-330c843c9776", 00:12:40.137 "is_configured": true, 00:12:40.137 "data_offset": 2048, 00:12:40.137 "data_size": 63488 00:12:40.138 }, 00:12:40.138 { 
00:12:40.138 "name": "BaseBdev3", 00:12:40.138 "uuid": "84917326-e06a-42a9-900d-77067c8ef11a", 00:12:40.138 "is_configured": true, 00:12:40.138 "data_offset": 2048, 00:12:40.138 "data_size": 63488 00:12:40.138 }, 00:12:40.138 { 00:12:40.138 "name": "BaseBdev4", 00:12:40.138 "uuid": "eb2e24b8-11f6-4768-9291-2d222a0220ba", 00:12:40.138 "is_configured": true, 00:12:40.138 "data_offset": 2048, 00:12:40.138 "data_size": 63488 00:12:40.138 } 00:12:40.138 ] 00:12:40.138 } 00:12:40.138 } 00:12:40.138 }' 00:12:40.138 10:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:40.138 10:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:40.138 BaseBdev2 00:12:40.138 BaseBdev3 00:12:40.138 BaseBdev4' 00:12:40.138 10:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.138 10:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:40.138 10:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.138 10:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.138 10:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:40.138 10:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.138 10:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.138 10:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.138 10:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:12:40.138 10:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:40.138 10:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.138 10:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:40.138 10:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.138 10:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.138 10:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.138 10:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.398 10:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:40.398 10:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:40.398 10:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.398 10:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:40.398 10:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.398 10:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.398 10:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.398 10:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.398 10:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:40.398 10:35:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:40.398 10:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.398 10:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:40.398 10:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.398 10:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.398 10:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.398 10:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.398 10:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:40.398 10:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:40.398 10:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:40.398 10:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.398 10:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.398 [2024-11-20 10:35:43.729346] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:40.398 [2024-11-20 10:35:43.729436] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:40.398 [2024-11-20 10:35:43.729555] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:40.398 [2024-11-20 10:35:43.729909] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:40.398 [2024-11-20 10:35:43.729974] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:12:40.398 10:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.398 10:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74033 00:12:40.398 10:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74033 ']' 00:12:40.398 10:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74033 00:12:40.398 10:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:40.398 10:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:40.398 10:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74033 00:12:40.398 killing process with pid 74033 00:12:40.398 10:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:40.398 10:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:40.398 10:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74033' 00:12:40.398 10:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74033 00:12:40.398 [2024-11-20 10:35:43.763179] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:40.398 10:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74033 00:12:40.967 [2024-11-20 10:35:44.183444] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:41.906 10:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:41.906 00:12:41.906 real 0m11.865s 00:12:41.906 user 0m18.886s 00:12:41.906 sys 0m2.043s 00:12:41.906 10:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:12:41.906 10:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.906 ************************************ 00:12:41.906 END TEST raid_state_function_test_sb 00:12:41.906 ************************************ 00:12:42.164 10:35:45 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:12:42.164 10:35:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:42.164 10:35:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:42.164 10:35:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:42.164 ************************************ 00:12:42.164 START TEST raid_superblock_test 00:12:42.164 ************************************ 00:12:42.164 10:35:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:12:42.164 10:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:42.165 10:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:42.165 10:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:42.165 10:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:42.165 10:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:42.165 10:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:42.165 10:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:42.165 10:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:42.165 10:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:42.165 10:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:42.165 10:35:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:42.165 10:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:42.165 10:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:42.165 10:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:12:42.165 10:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:42.165 10:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74709 00:12:42.165 10:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:42.165 10:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74709 00:12:42.165 10:35:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74709 ']' 00:12:42.165 10:35:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.165 10:35:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:42.165 10:35:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.165 10:35:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:42.165 10:35:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.165 [2024-11-20 10:35:45.494991] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:12:42.165 [2024-11-20 10:35:45.495702] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74709 ] 00:12:42.424 [2024-11-20 10:35:45.672240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.424 [2024-11-20 10:35:45.788018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.683 [2024-11-20 10:35:45.987093] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:42.683 [2024-11-20 10:35:45.987221] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:42.943 10:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:42.943 10:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:42.943 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:42.943 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:42.943 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:42.943 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:42.943 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:42.943 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:42.943 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:42.943 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:42.943 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:42.943 
10:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.943 10:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.943 malloc1 00:12:42.943 10:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.943 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:42.943 10:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.943 10:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.943 [2024-11-20 10:35:46.387393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:42.943 [2024-11-20 10:35:46.387526] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.943 [2024-11-20 10:35:46.387577] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:42.943 [2024-11-20 10:35:46.387617] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.943 [2024-11-20 10:35:46.389812] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.943 [2024-11-20 10:35:46.389888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:42.943 pt1 00:12:42.943 10:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.943 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:42.943 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:42.943 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:42.943 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:42.943 10:35:46 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:42.943 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:42.943 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:42.943 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:42.943 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:42.943 10:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.943 10:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.204 malloc2 00:12:43.204 10:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.204 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:43.204 10:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.204 10:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.204 [2024-11-20 10:35:46.441817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:43.204 [2024-11-20 10:35:46.441932] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.204 [2024-11-20 10:35:46.441976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:43.204 [2024-11-20 10:35:46.442012] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.204 [2024-11-20 10:35:46.444193] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.204 [2024-11-20 10:35:46.444268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:43.204 
pt2 00:12:43.204 10:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.204 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:43.204 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:43.204 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:43.204 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:43.204 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:43.204 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:43.204 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:43.204 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:43.204 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:43.204 10:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.204 10:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.204 malloc3 00:12:43.204 10:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.204 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:43.204 10:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.204 10:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.204 [2024-11-20 10:35:46.510131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:43.204 [2024-11-20 10:35:46.510187] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.204 [2024-11-20 10:35:46.510209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:43.204 [2024-11-20 10:35:46.510218] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.204 [2024-11-20 10:35:46.512545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.204 [2024-11-20 10:35:46.512587] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:43.204 pt3 00:12:43.204 10:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.204 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:43.204 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:43.204 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:43.204 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:43.204 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:43.204 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:43.204 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:43.204 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:43.204 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:43.204 10:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.204 10:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.204 malloc4 00:12:43.204 10:35:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.204 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:43.204 10:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.204 10:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.204 [2024-11-20 10:35:46.564693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:43.204 [2024-11-20 10:35:46.564796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.205 [2024-11-20 10:35:46.564833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:43.205 [2024-11-20 10:35:46.564862] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.205 [2024-11-20 10:35:46.567081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.205 [2024-11-20 10:35:46.567155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:43.205 pt4 00:12:43.205 10:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.205 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:43.205 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:43.205 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:43.205 10:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.205 10:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.205 [2024-11-20 10:35:46.576695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:43.205 [2024-11-20 10:35:46.578525] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:43.205 [2024-11-20 10:35:46.578641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:43.205 [2024-11-20 10:35:46.578702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:43.205 [2024-11-20 10:35:46.578937] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:43.205 [2024-11-20 10:35:46.578988] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:43.205 [2024-11-20 10:35:46.579275] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:43.205 [2024-11-20 10:35:46.579494] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:43.205 [2024-11-20 10:35:46.579546] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:43.205 [2024-11-20 10:35:46.579740] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:43.205 10:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.205 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:43.205 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.205 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.205 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.205 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.205 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:43.205 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.205 
10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.205 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.205 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.205 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.205 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.205 10:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.205 10:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.205 10:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.205 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.205 "name": "raid_bdev1", 00:12:43.205 "uuid": "18ea2d04-fa08-4b62-9786-b6da7ef6846f", 00:12:43.205 "strip_size_kb": 0, 00:12:43.205 "state": "online", 00:12:43.205 "raid_level": "raid1", 00:12:43.205 "superblock": true, 00:12:43.205 "num_base_bdevs": 4, 00:12:43.205 "num_base_bdevs_discovered": 4, 00:12:43.205 "num_base_bdevs_operational": 4, 00:12:43.205 "base_bdevs_list": [ 00:12:43.205 { 00:12:43.205 "name": "pt1", 00:12:43.205 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:43.205 "is_configured": true, 00:12:43.205 "data_offset": 2048, 00:12:43.205 "data_size": 63488 00:12:43.205 }, 00:12:43.205 { 00:12:43.205 "name": "pt2", 00:12:43.205 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:43.205 "is_configured": true, 00:12:43.205 "data_offset": 2048, 00:12:43.205 "data_size": 63488 00:12:43.205 }, 00:12:43.205 { 00:12:43.205 "name": "pt3", 00:12:43.205 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:43.205 "is_configured": true, 00:12:43.205 "data_offset": 2048, 00:12:43.205 "data_size": 63488 
00:12:43.205 }, 00:12:43.205 { 00:12:43.205 "name": "pt4", 00:12:43.205 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:43.205 "is_configured": true, 00:12:43.205 "data_offset": 2048, 00:12:43.205 "data_size": 63488 00:12:43.205 } 00:12:43.205 ] 00:12:43.205 }' 00:12:43.205 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.205 10:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.774 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:43.774 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:43.774 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:43.774 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:43.774 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:43.775 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:43.775 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:43.775 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.775 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.775 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:43.775 [2024-11-20 10:35:47.092219] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:43.775 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.775 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:43.775 "name": "raid_bdev1", 00:12:43.775 "aliases": [ 00:12:43.775 "18ea2d04-fa08-4b62-9786-b6da7ef6846f" 00:12:43.775 ], 
00:12:43.775 "product_name": "Raid Volume", 00:12:43.775 "block_size": 512, 00:12:43.775 "num_blocks": 63488, 00:12:43.775 "uuid": "18ea2d04-fa08-4b62-9786-b6da7ef6846f", 00:12:43.775 "assigned_rate_limits": { 00:12:43.775 "rw_ios_per_sec": 0, 00:12:43.775 "rw_mbytes_per_sec": 0, 00:12:43.775 "r_mbytes_per_sec": 0, 00:12:43.775 "w_mbytes_per_sec": 0 00:12:43.775 }, 00:12:43.775 "claimed": false, 00:12:43.775 "zoned": false, 00:12:43.775 "supported_io_types": { 00:12:43.775 "read": true, 00:12:43.775 "write": true, 00:12:43.775 "unmap": false, 00:12:43.775 "flush": false, 00:12:43.775 "reset": true, 00:12:43.775 "nvme_admin": false, 00:12:43.775 "nvme_io": false, 00:12:43.775 "nvme_io_md": false, 00:12:43.775 "write_zeroes": true, 00:12:43.775 "zcopy": false, 00:12:43.775 "get_zone_info": false, 00:12:43.775 "zone_management": false, 00:12:43.775 "zone_append": false, 00:12:43.775 "compare": false, 00:12:43.775 "compare_and_write": false, 00:12:43.775 "abort": false, 00:12:43.775 "seek_hole": false, 00:12:43.775 "seek_data": false, 00:12:43.775 "copy": false, 00:12:43.775 "nvme_iov_md": false 00:12:43.775 }, 00:12:43.775 "memory_domains": [ 00:12:43.775 { 00:12:43.775 "dma_device_id": "system", 00:12:43.775 "dma_device_type": 1 00:12:43.775 }, 00:12:43.775 { 00:12:43.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.775 "dma_device_type": 2 00:12:43.775 }, 00:12:43.775 { 00:12:43.775 "dma_device_id": "system", 00:12:43.775 "dma_device_type": 1 00:12:43.775 }, 00:12:43.775 { 00:12:43.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.775 "dma_device_type": 2 00:12:43.775 }, 00:12:43.775 { 00:12:43.775 "dma_device_id": "system", 00:12:43.775 "dma_device_type": 1 00:12:43.775 }, 00:12:43.775 { 00:12:43.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.775 "dma_device_type": 2 00:12:43.775 }, 00:12:43.775 { 00:12:43.775 "dma_device_id": "system", 00:12:43.775 "dma_device_type": 1 00:12:43.775 }, 00:12:43.775 { 00:12:43.775 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:43.775 "dma_device_type": 2 00:12:43.775 } 00:12:43.775 ], 00:12:43.775 "driver_specific": { 00:12:43.775 "raid": { 00:12:43.775 "uuid": "18ea2d04-fa08-4b62-9786-b6da7ef6846f", 00:12:43.775 "strip_size_kb": 0, 00:12:43.775 "state": "online", 00:12:43.775 "raid_level": "raid1", 00:12:43.775 "superblock": true, 00:12:43.775 "num_base_bdevs": 4, 00:12:43.775 "num_base_bdevs_discovered": 4, 00:12:43.775 "num_base_bdevs_operational": 4, 00:12:43.775 "base_bdevs_list": [ 00:12:43.775 { 00:12:43.775 "name": "pt1", 00:12:43.775 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:43.775 "is_configured": true, 00:12:43.775 "data_offset": 2048, 00:12:43.775 "data_size": 63488 00:12:43.775 }, 00:12:43.775 { 00:12:43.775 "name": "pt2", 00:12:43.775 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:43.775 "is_configured": true, 00:12:43.775 "data_offset": 2048, 00:12:43.775 "data_size": 63488 00:12:43.775 }, 00:12:43.775 { 00:12:43.775 "name": "pt3", 00:12:43.775 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:43.775 "is_configured": true, 00:12:43.775 "data_offset": 2048, 00:12:43.775 "data_size": 63488 00:12:43.775 }, 00:12:43.775 { 00:12:43.775 "name": "pt4", 00:12:43.775 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:43.775 "is_configured": true, 00:12:43.775 "data_offset": 2048, 00:12:43.775 "data_size": 63488 00:12:43.775 } 00:12:43.775 ] 00:12:43.775 } 00:12:43.775 } 00:12:43.775 }' 00:12:43.775 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:43.775 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:43.775 pt2 00:12:43.775 pt3 00:12:43.775 pt4' 00:12:43.775 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:43.775 10:35:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:43.775 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:43.775 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:43.775 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.775 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.775 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:43.775 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.035 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:44.035 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:44.035 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:44.035 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:44.035 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.035 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:44.035 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.035 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.035 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:44.035 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:44.035 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:44.035 10:35:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:44.035 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:44.035 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.035 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.035 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.035 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:44.035 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:44.035 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:44.035 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:44.035 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.035 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.035 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:44.035 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.035 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:44.035 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:44.035 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:44.035 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:44.035 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:44.035 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.035 [2024-11-20 10:35:47.407672] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:44.035 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.035 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=18ea2d04-fa08-4b62-9786-b6da7ef6846f 00:12:44.035 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 18ea2d04-fa08-4b62-9786-b6da7ef6846f ']' 00:12:44.035 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:44.035 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.035 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.035 [2024-11-20 10:35:47.451246] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:44.035 [2024-11-20 10:35:47.451274] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:44.035 [2024-11-20 10:35:47.451382] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:44.035 [2024-11-20 10:35:47.451469] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:44.036 [2024-11-20 10:35:47.451484] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:44.036 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.036 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:44.036 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.036 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:44.036 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.036 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.036 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:44.036 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:44.036 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:44.036 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:44.036 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.036 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.296 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.296 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:44.296 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:44.296 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.296 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.296 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.296 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:44.296 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:44.296 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:44.297 10:35:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.297 [2024-11-20 10:35:47.615015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:44.297 [2024-11-20 10:35:47.617463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:44.297 [2024-11-20 10:35:47.617617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:44.297 [2024-11-20 10:35:47.617678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:44.297 [2024-11-20 10:35:47.617764] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:44.297 [2024-11-20 10:35:47.617843] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:44.297 [2024-11-20 10:35:47.617875] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:44.297 [2024-11-20 10:35:47.617904] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:44.297 [2024-11-20 10:35:47.617924] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:44.297 [2024-11-20 10:35:47.617942] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:12:44.297 request: 00:12:44.297 { 00:12:44.297 "name": "raid_bdev1", 00:12:44.297 "raid_level": "raid1", 00:12:44.297 "base_bdevs": [ 00:12:44.297 "malloc1", 00:12:44.297 "malloc2", 00:12:44.297 "malloc3", 00:12:44.297 "malloc4" 00:12:44.297 ], 00:12:44.297 "superblock": false, 00:12:44.297 "method": "bdev_raid_create", 00:12:44.297 "req_id": 1 00:12:44.297 } 00:12:44.297 Got JSON-RPC error response 00:12:44.297 response: 00:12:44.297 { 00:12:44.297 "code": -17, 00:12:44.297 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:44.297 } 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:44.297 
10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.297 [2024-11-20 10:35:47.682869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:44.297 [2024-11-20 10:35:47.683000] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.297 [2024-11-20 10:35:47.683047] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:44.297 [2024-11-20 10:35:47.683086] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.297 [2024-11-20 10:35:47.685520] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.297 [2024-11-20 10:35:47.685623] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:44.297 [2024-11-20 10:35:47.685758] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:44.297 [2024-11-20 10:35:47.685865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:44.297 pt1 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:44.297 10:35:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.297 "name": "raid_bdev1", 00:12:44.297 "uuid": "18ea2d04-fa08-4b62-9786-b6da7ef6846f", 00:12:44.297 "strip_size_kb": 0, 00:12:44.297 "state": "configuring", 00:12:44.297 "raid_level": "raid1", 00:12:44.297 "superblock": true, 00:12:44.297 "num_base_bdevs": 4, 00:12:44.297 "num_base_bdevs_discovered": 1, 00:12:44.297 "num_base_bdevs_operational": 4, 00:12:44.297 "base_bdevs_list": [ 00:12:44.297 { 00:12:44.297 "name": "pt1", 00:12:44.297 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:44.297 "is_configured": true, 00:12:44.297 "data_offset": 2048, 00:12:44.297 "data_size": 63488 00:12:44.297 }, 00:12:44.297 { 00:12:44.297 "name": null, 00:12:44.297 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:44.297 "is_configured": false, 00:12:44.297 "data_offset": 2048, 00:12:44.297 "data_size": 63488 00:12:44.297 }, 00:12:44.297 { 00:12:44.297 "name": null, 00:12:44.297 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:44.297 
"is_configured": false, 00:12:44.297 "data_offset": 2048, 00:12:44.297 "data_size": 63488 00:12:44.297 }, 00:12:44.297 { 00:12:44.297 "name": null, 00:12:44.297 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:44.297 "is_configured": false, 00:12:44.297 "data_offset": 2048, 00:12:44.297 "data_size": 63488 00:12:44.297 } 00:12:44.297 ] 00:12:44.297 }' 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.297 10:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.867 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:44.867 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:44.867 10:35:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.867 10:35:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.867 [2024-11-20 10:35:48.130148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:44.867 [2024-11-20 10:35:48.130279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.867 [2024-11-20 10:35:48.130306] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:44.867 [2024-11-20 10:35:48.130319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.867 [2024-11-20 10:35:48.130810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.867 [2024-11-20 10:35:48.130834] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:44.867 [2024-11-20 10:35:48.130922] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:44.867 [2024-11-20 10:35:48.130955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:12:44.867 pt2 00:12:44.867 10:35:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.867 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:44.867 10:35:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.867 10:35:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.867 [2024-11-20 10:35:48.138176] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:44.867 10:35:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.867 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:44.867 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:44.867 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.867 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:44.867 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:44.867 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:44.867 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.867 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.867 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.867 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.867 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.867 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:12:44.867 10:35:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.867 10:35:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.867 10:35:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.867 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.868 "name": "raid_bdev1", 00:12:44.868 "uuid": "18ea2d04-fa08-4b62-9786-b6da7ef6846f", 00:12:44.868 "strip_size_kb": 0, 00:12:44.868 "state": "configuring", 00:12:44.868 "raid_level": "raid1", 00:12:44.868 "superblock": true, 00:12:44.868 "num_base_bdevs": 4, 00:12:44.868 "num_base_bdevs_discovered": 1, 00:12:44.868 "num_base_bdevs_operational": 4, 00:12:44.868 "base_bdevs_list": [ 00:12:44.868 { 00:12:44.868 "name": "pt1", 00:12:44.868 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:44.868 "is_configured": true, 00:12:44.868 "data_offset": 2048, 00:12:44.868 "data_size": 63488 00:12:44.868 }, 00:12:44.868 { 00:12:44.868 "name": null, 00:12:44.868 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:44.868 "is_configured": false, 00:12:44.868 "data_offset": 0, 00:12:44.868 "data_size": 63488 00:12:44.868 }, 00:12:44.868 { 00:12:44.868 "name": null, 00:12:44.868 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:44.868 "is_configured": false, 00:12:44.868 "data_offset": 2048, 00:12:44.868 "data_size": 63488 00:12:44.868 }, 00:12:44.868 { 00:12:44.868 "name": null, 00:12:44.868 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:44.868 "is_configured": false, 00:12:44.868 "data_offset": 2048, 00:12:44.868 "data_size": 63488 00:12:44.868 } 00:12:44.868 ] 00:12:44.868 }' 00:12:44.868 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.868 10:35:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.128 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:12:45.128 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:45.128 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:45.128 10:35:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.128 10:35:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.128 [2024-11-20 10:35:48.569406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:45.128 [2024-11-20 10:35:48.569476] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.128 [2024-11-20 10:35:48.569503] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:45.128 [2024-11-20 10:35:48.569514] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.128 [2024-11-20 10:35:48.569966] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.128 [2024-11-20 10:35:48.569984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:45.128 [2024-11-20 10:35:48.570074] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:45.128 [2024-11-20 10:35:48.570097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:45.128 pt2 00:12:45.128 10:35:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.128 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:45.128 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:45.128 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:45.128 10:35:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.128 10:35:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.128 [2024-11-20 10:35:48.577345] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:45.128 [2024-11-20 10:35:48.577416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.128 [2024-11-20 10:35:48.577454] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:45.128 [2024-11-20 10:35:48.577463] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.128 [2024-11-20 10:35:48.577874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.128 [2024-11-20 10:35:48.577898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:45.128 [2024-11-20 10:35:48.577972] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:45.128 [2024-11-20 10:35:48.577993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:45.128 pt3 00:12:45.128 10:35:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.128 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:45.128 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:45.128 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:45.128 10:35:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.128 10:35:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.128 [2024-11-20 10:35:48.585303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:45.128 [2024-11-20 
10:35:48.585349] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.128 [2024-11-20 10:35:48.585381] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:45.128 [2024-11-20 10:35:48.585389] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.128 [2024-11-20 10:35:48.585763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.128 [2024-11-20 10:35:48.585786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:45.128 [2024-11-20 10:35:48.585851] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:45.128 [2024-11-20 10:35:48.585870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:45.128 [2024-11-20 10:35:48.586025] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:45.128 [2024-11-20 10:35:48.586035] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:45.128 [2024-11-20 10:35:48.586291] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:45.128 [2024-11-20 10:35:48.586483] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:45.128 [2024-11-20 10:35:48.586499] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:45.128 [2024-11-20 10:35:48.586647] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.128 pt4 00:12:45.128 10:35:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.128 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:45.128 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:45.128 10:35:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:45.128 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.128 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.128 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.128 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.128 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:45.128 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.128 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.128 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.128 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.128 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.128 10:35:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.128 10:35:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.128 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.388 10:35:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.388 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.388 "name": "raid_bdev1", 00:12:45.388 "uuid": "18ea2d04-fa08-4b62-9786-b6da7ef6846f", 00:12:45.388 "strip_size_kb": 0, 00:12:45.388 "state": "online", 00:12:45.388 "raid_level": "raid1", 00:12:45.388 "superblock": true, 00:12:45.388 "num_base_bdevs": 4, 00:12:45.389 
"num_base_bdevs_discovered": 4, 00:12:45.389 "num_base_bdevs_operational": 4, 00:12:45.389 "base_bdevs_list": [ 00:12:45.389 { 00:12:45.389 "name": "pt1", 00:12:45.389 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:45.389 "is_configured": true, 00:12:45.389 "data_offset": 2048, 00:12:45.389 "data_size": 63488 00:12:45.389 }, 00:12:45.389 { 00:12:45.389 "name": "pt2", 00:12:45.389 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:45.389 "is_configured": true, 00:12:45.389 "data_offset": 2048, 00:12:45.389 "data_size": 63488 00:12:45.389 }, 00:12:45.389 { 00:12:45.389 "name": "pt3", 00:12:45.389 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:45.389 "is_configured": true, 00:12:45.389 "data_offset": 2048, 00:12:45.389 "data_size": 63488 00:12:45.389 }, 00:12:45.389 { 00:12:45.389 "name": "pt4", 00:12:45.389 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:45.389 "is_configured": true, 00:12:45.389 "data_offset": 2048, 00:12:45.389 "data_size": 63488 00:12:45.389 } 00:12:45.389 ] 00:12:45.389 }' 00:12:45.389 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.389 10:35:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.648 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:45.648 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:45.648 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:45.648 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:45.648 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:45.648 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:45.648 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:45.648 10:35:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:45.648 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.648 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.648 [2024-11-20 10:35:49.037081] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:45.648 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.648 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:45.648 "name": "raid_bdev1", 00:12:45.649 "aliases": [ 00:12:45.649 "18ea2d04-fa08-4b62-9786-b6da7ef6846f" 00:12:45.649 ], 00:12:45.649 "product_name": "Raid Volume", 00:12:45.649 "block_size": 512, 00:12:45.649 "num_blocks": 63488, 00:12:45.649 "uuid": "18ea2d04-fa08-4b62-9786-b6da7ef6846f", 00:12:45.649 "assigned_rate_limits": { 00:12:45.649 "rw_ios_per_sec": 0, 00:12:45.649 "rw_mbytes_per_sec": 0, 00:12:45.649 "r_mbytes_per_sec": 0, 00:12:45.649 "w_mbytes_per_sec": 0 00:12:45.649 }, 00:12:45.649 "claimed": false, 00:12:45.649 "zoned": false, 00:12:45.649 "supported_io_types": { 00:12:45.649 "read": true, 00:12:45.649 "write": true, 00:12:45.649 "unmap": false, 00:12:45.649 "flush": false, 00:12:45.649 "reset": true, 00:12:45.649 "nvme_admin": false, 00:12:45.649 "nvme_io": false, 00:12:45.649 "nvme_io_md": false, 00:12:45.649 "write_zeroes": true, 00:12:45.649 "zcopy": false, 00:12:45.649 "get_zone_info": false, 00:12:45.649 "zone_management": false, 00:12:45.649 "zone_append": false, 00:12:45.649 "compare": false, 00:12:45.649 "compare_and_write": false, 00:12:45.649 "abort": false, 00:12:45.649 "seek_hole": false, 00:12:45.649 "seek_data": false, 00:12:45.649 "copy": false, 00:12:45.649 "nvme_iov_md": false 00:12:45.649 }, 00:12:45.649 "memory_domains": [ 00:12:45.649 { 00:12:45.649 "dma_device_id": "system", 00:12:45.649 
"dma_device_type": 1 00:12:45.649 }, 00:12:45.649 { 00:12:45.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.649 "dma_device_type": 2 00:12:45.649 }, 00:12:45.649 { 00:12:45.649 "dma_device_id": "system", 00:12:45.649 "dma_device_type": 1 00:12:45.649 }, 00:12:45.649 { 00:12:45.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.649 "dma_device_type": 2 00:12:45.649 }, 00:12:45.649 { 00:12:45.649 "dma_device_id": "system", 00:12:45.649 "dma_device_type": 1 00:12:45.649 }, 00:12:45.649 { 00:12:45.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.649 "dma_device_type": 2 00:12:45.649 }, 00:12:45.649 { 00:12:45.649 "dma_device_id": "system", 00:12:45.649 "dma_device_type": 1 00:12:45.649 }, 00:12:45.649 { 00:12:45.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.649 "dma_device_type": 2 00:12:45.649 } 00:12:45.649 ], 00:12:45.649 "driver_specific": { 00:12:45.649 "raid": { 00:12:45.649 "uuid": "18ea2d04-fa08-4b62-9786-b6da7ef6846f", 00:12:45.649 "strip_size_kb": 0, 00:12:45.649 "state": "online", 00:12:45.649 "raid_level": "raid1", 00:12:45.649 "superblock": true, 00:12:45.649 "num_base_bdevs": 4, 00:12:45.649 "num_base_bdevs_discovered": 4, 00:12:45.649 "num_base_bdevs_operational": 4, 00:12:45.649 "base_bdevs_list": [ 00:12:45.649 { 00:12:45.649 "name": "pt1", 00:12:45.649 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:45.649 "is_configured": true, 00:12:45.649 "data_offset": 2048, 00:12:45.649 "data_size": 63488 00:12:45.649 }, 00:12:45.649 { 00:12:45.649 "name": "pt2", 00:12:45.649 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:45.649 "is_configured": true, 00:12:45.649 "data_offset": 2048, 00:12:45.649 "data_size": 63488 00:12:45.649 }, 00:12:45.649 { 00:12:45.649 "name": "pt3", 00:12:45.649 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:45.649 "is_configured": true, 00:12:45.649 "data_offset": 2048, 00:12:45.649 "data_size": 63488 00:12:45.649 }, 00:12:45.649 { 00:12:45.649 "name": "pt4", 00:12:45.649 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:12:45.649 "is_configured": true, 00:12:45.649 "data_offset": 2048, 00:12:45.649 "data_size": 63488 00:12:45.649 } 00:12:45.649 ] 00:12:45.649 } 00:12:45.649 } 00:12:45.649 }' 00:12:45.649 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:45.649 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:45.649 pt2 00:12:45.649 pt3 00:12:45.649 pt4' 00:12:45.649 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.910 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:45.910 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:45.910 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:45.910 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.910 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.910 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.910 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.910 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:45.910 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:45.910 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:45.910 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:45.910 10:35:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.910 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.910 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.910 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.910 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:45.910 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:45.910 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:45.910 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:45.910 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.910 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.910 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.910 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.911 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:45.911 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:45.911 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:45.911 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:45.911 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.911 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:12:45.911 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.911 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.911 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:45.911 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:45.911 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:45.911 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:45.911 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.911 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.173 [2024-11-20 10:35:49.392393] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:46.173 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.173 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 18ea2d04-fa08-4b62-9786-b6da7ef6846f '!=' 18ea2d04-fa08-4b62-9786-b6da7ef6846f ']' 00:12:46.173 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:46.173 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:46.173 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:46.173 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:46.173 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.173 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.173 [2024-11-20 10:35:49.436008] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:46.173 10:35:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.173 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:46.173 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:46.173 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:46.174 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:46.174 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:46.174 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:46.174 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.174 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.174 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.174 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.174 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.174 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.174 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.174 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.174 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.174 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.174 "name": "raid_bdev1", 00:12:46.174 "uuid": "18ea2d04-fa08-4b62-9786-b6da7ef6846f", 00:12:46.174 "strip_size_kb": 0, 00:12:46.174 "state": "online", 
00:12:46.174 "raid_level": "raid1", 00:12:46.174 "superblock": true, 00:12:46.174 "num_base_bdevs": 4, 00:12:46.174 "num_base_bdevs_discovered": 3, 00:12:46.174 "num_base_bdevs_operational": 3, 00:12:46.174 "base_bdevs_list": [ 00:12:46.174 { 00:12:46.174 "name": null, 00:12:46.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.174 "is_configured": false, 00:12:46.174 "data_offset": 0, 00:12:46.174 "data_size": 63488 00:12:46.174 }, 00:12:46.174 { 00:12:46.174 "name": "pt2", 00:12:46.174 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:46.174 "is_configured": true, 00:12:46.174 "data_offset": 2048, 00:12:46.174 "data_size": 63488 00:12:46.174 }, 00:12:46.174 { 00:12:46.174 "name": "pt3", 00:12:46.174 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:46.174 "is_configured": true, 00:12:46.174 "data_offset": 2048, 00:12:46.174 "data_size": 63488 00:12:46.174 }, 00:12:46.174 { 00:12:46.174 "name": "pt4", 00:12:46.174 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:46.174 "is_configured": true, 00:12:46.174 "data_offset": 2048, 00:12:46.174 "data_size": 63488 00:12:46.174 } 00:12:46.174 ] 00:12:46.174 }' 00:12:46.174 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.174 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.442 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:46.442 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.442 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.442 [2024-11-20 10:35:49.871292] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:46.442 [2024-11-20 10:35:49.871407] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:46.442 [2024-11-20 10:35:49.871525] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:12:46.442 [2024-11-20 10:35:49.871651] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:46.442 [2024-11-20 10:35:49.871724] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:46.442 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.442 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.442 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:46.442 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.442 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.442 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.717 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:46.717 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:46.717 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:46.717 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:46.717 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:46.717 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.717 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.717 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.717 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:46.717 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:46.717 
10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:46.717 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.717 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.717 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.717 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:46.717 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:46.717 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:12:46.717 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.717 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.717 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.717 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:46.717 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:46.717 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:46.717 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:46.717 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:46.717 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.718 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.718 [2024-11-20 10:35:49.967101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:46.718 [2024-11-20 10:35:49.967166] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:46.718 [2024-11-20 10:35:49.967186] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:46.718 [2024-11-20 10:35:49.967197] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:46.718 [2024-11-20 10:35:49.969700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:46.718 [2024-11-20 10:35:49.969786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:46.718 [2024-11-20 10:35:49.969905] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:46.718 [2024-11-20 10:35:49.969963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:46.718 pt2 00:12:46.718 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.718 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:46.718 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:46.718 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:46.718 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:46.718 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:46.718 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:46.718 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.718 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.718 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.718 10:35:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.718 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.718 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.718 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.718 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.718 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.718 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.718 "name": "raid_bdev1", 00:12:46.718 "uuid": "18ea2d04-fa08-4b62-9786-b6da7ef6846f", 00:12:46.718 "strip_size_kb": 0, 00:12:46.718 "state": "configuring", 00:12:46.718 "raid_level": "raid1", 00:12:46.718 "superblock": true, 00:12:46.718 "num_base_bdevs": 4, 00:12:46.718 "num_base_bdevs_discovered": 1, 00:12:46.718 "num_base_bdevs_operational": 3, 00:12:46.718 "base_bdevs_list": [ 00:12:46.718 { 00:12:46.718 "name": null, 00:12:46.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.718 "is_configured": false, 00:12:46.718 "data_offset": 2048, 00:12:46.718 "data_size": 63488 00:12:46.718 }, 00:12:46.718 { 00:12:46.718 "name": "pt2", 00:12:46.718 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:46.718 "is_configured": true, 00:12:46.718 "data_offset": 2048, 00:12:46.718 "data_size": 63488 00:12:46.718 }, 00:12:46.718 { 00:12:46.718 "name": null, 00:12:46.718 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:46.718 "is_configured": false, 00:12:46.718 "data_offset": 2048, 00:12:46.718 "data_size": 63488 00:12:46.718 }, 00:12:46.718 { 00:12:46.718 "name": null, 00:12:46.718 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:46.718 "is_configured": false, 00:12:46.718 "data_offset": 2048, 00:12:46.718 "data_size": 63488 00:12:46.718 } 00:12:46.718 ] 00:12:46.718 }' 
00:12:46.718 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.718 10:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.977 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:46.977 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:46.977 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:46.977 10:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.977 10:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.237 [2024-11-20 10:35:50.454322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:47.237 [2024-11-20 10:35:50.454481] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.237 [2024-11-20 10:35:50.454525] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:47.237 [2024-11-20 10:35:50.454555] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.237 [2024-11-20 10:35:50.455068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.237 [2024-11-20 10:35:50.455134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:47.237 [2024-11-20 10:35:50.455254] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:47.237 [2024-11-20 10:35:50.455305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:47.237 pt3 00:12:47.237 10:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.237 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:12:47.237 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:47.237 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:47.237 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:47.237 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:47.237 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:47.237 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.237 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.237 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.237 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.237 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.237 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.237 10:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.237 10:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.237 10:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.237 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.237 "name": "raid_bdev1", 00:12:47.237 "uuid": "18ea2d04-fa08-4b62-9786-b6da7ef6846f", 00:12:47.237 "strip_size_kb": 0, 00:12:47.237 "state": "configuring", 00:12:47.237 "raid_level": "raid1", 00:12:47.237 "superblock": true, 00:12:47.237 "num_base_bdevs": 4, 00:12:47.237 "num_base_bdevs_discovered": 2, 00:12:47.237 "num_base_bdevs_operational": 3, 00:12:47.237 
"base_bdevs_list": [ 00:12:47.237 { 00:12:47.237 "name": null, 00:12:47.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.237 "is_configured": false, 00:12:47.237 "data_offset": 2048, 00:12:47.237 "data_size": 63488 00:12:47.237 }, 00:12:47.237 { 00:12:47.237 "name": "pt2", 00:12:47.237 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:47.237 "is_configured": true, 00:12:47.237 "data_offset": 2048, 00:12:47.237 "data_size": 63488 00:12:47.237 }, 00:12:47.237 { 00:12:47.237 "name": "pt3", 00:12:47.237 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:47.237 "is_configured": true, 00:12:47.237 "data_offset": 2048, 00:12:47.237 "data_size": 63488 00:12:47.237 }, 00:12:47.237 { 00:12:47.237 "name": null, 00:12:47.237 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:47.237 "is_configured": false, 00:12:47.237 "data_offset": 2048, 00:12:47.237 "data_size": 63488 00:12:47.237 } 00:12:47.237 ] 00:12:47.237 }' 00:12:47.237 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.237 10:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.496 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:47.496 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:47.496 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:12:47.496 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:47.496 10:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.497 10:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.497 [2024-11-20 10:35:50.917576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:47.497 [2024-11-20 10:35:50.917658] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.497 [2024-11-20 10:35:50.917682] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:47.497 [2024-11-20 10:35:50.917693] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.497 [2024-11-20 10:35:50.918163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.497 [2024-11-20 10:35:50.918197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:47.497 [2024-11-20 10:35:50.918290] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:47.497 [2024-11-20 10:35:50.918321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:47.497 [2024-11-20 10:35:50.918524] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:47.497 [2024-11-20 10:35:50.918589] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:47.497 [2024-11-20 10:35:50.918871] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:47.497 [2024-11-20 10:35:50.919030] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:47.497 [2024-11-20 10:35:50.919045] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:47.497 [2024-11-20 10:35:50.919190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:47.497 pt4 00:12:47.497 10:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.497 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:47.497 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:47.497 10:35:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:47.497 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:47.497 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:47.497 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:47.497 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.497 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.497 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.497 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.497 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.497 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.497 10:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.497 10:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.497 10:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.756 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.756 "name": "raid_bdev1", 00:12:47.756 "uuid": "18ea2d04-fa08-4b62-9786-b6da7ef6846f", 00:12:47.756 "strip_size_kb": 0, 00:12:47.756 "state": "online", 00:12:47.756 "raid_level": "raid1", 00:12:47.756 "superblock": true, 00:12:47.756 "num_base_bdevs": 4, 00:12:47.756 "num_base_bdevs_discovered": 3, 00:12:47.756 "num_base_bdevs_operational": 3, 00:12:47.756 "base_bdevs_list": [ 00:12:47.756 { 00:12:47.756 "name": null, 00:12:47.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.756 "is_configured": false, 00:12:47.756 
"data_offset": 2048, 00:12:47.756 "data_size": 63488 00:12:47.756 }, 00:12:47.756 { 00:12:47.756 "name": "pt2", 00:12:47.756 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:47.756 "is_configured": true, 00:12:47.756 "data_offset": 2048, 00:12:47.756 "data_size": 63488 00:12:47.756 }, 00:12:47.756 { 00:12:47.756 "name": "pt3", 00:12:47.756 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:47.756 "is_configured": true, 00:12:47.756 "data_offset": 2048, 00:12:47.756 "data_size": 63488 00:12:47.756 }, 00:12:47.756 { 00:12:47.756 "name": "pt4", 00:12:47.756 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:47.756 "is_configured": true, 00:12:47.756 "data_offset": 2048, 00:12:47.756 "data_size": 63488 00:12:47.756 } 00:12:47.756 ] 00:12:47.756 }' 00:12:47.756 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.756 10:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.016 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:48.016 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.016 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.016 [2024-11-20 10:35:51.376741] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:48.016 [2024-11-20 10:35:51.376865] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:48.016 [2024-11-20 10:35:51.376979] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:48.016 [2024-11-20 10:35:51.377073] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:48.016 [2024-11-20 10:35:51.377140] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:48.016 10:35:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.016 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:48.016 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.016 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.016 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.016 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.016 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:48.016 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:48.016 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:12:48.016 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:12:48.016 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:12:48.016 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.016 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.016 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.016 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:48.016 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.016 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.016 [2024-11-20 10:35:51.432610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:48.016 [2024-11-20 10:35:51.432725] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:48.016 [2024-11-20 10:35:51.432762] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:12:48.016 [2024-11-20 10:35:51.432795] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:48.016 [2024-11-20 10:35:51.435022] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:48.016 [2024-11-20 10:35:51.435100] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:48.016 [2024-11-20 10:35:51.435224] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:48.016 [2024-11-20 10:35:51.435289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:48.016 [2024-11-20 10:35:51.435456] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:48.016 [2024-11-20 10:35:51.435513] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:48.016 [2024-11-20 10:35:51.435548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:12:48.016 [2024-11-20 10:35:51.435661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:48.016 [2024-11-20 10:35:51.435811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:48.016 pt1 00:12:48.016 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.016 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:12:48.016 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:48.016 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:48.016 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:12:48.016 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.016 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.016 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:48.016 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.016 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.016 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.016 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.016 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.016 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.016 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.016 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.016 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.276 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.276 "name": "raid_bdev1", 00:12:48.276 "uuid": "18ea2d04-fa08-4b62-9786-b6da7ef6846f", 00:12:48.276 "strip_size_kb": 0, 00:12:48.276 "state": "configuring", 00:12:48.276 "raid_level": "raid1", 00:12:48.276 "superblock": true, 00:12:48.276 "num_base_bdevs": 4, 00:12:48.276 "num_base_bdevs_discovered": 2, 00:12:48.276 "num_base_bdevs_operational": 3, 00:12:48.276 "base_bdevs_list": [ 00:12:48.276 { 00:12:48.276 "name": null, 00:12:48.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.276 "is_configured": false, 00:12:48.276 "data_offset": 2048, 00:12:48.276 
"data_size": 63488 00:12:48.276 }, 00:12:48.276 { 00:12:48.276 "name": "pt2", 00:12:48.276 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:48.276 "is_configured": true, 00:12:48.276 "data_offset": 2048, 00:12:48.276 "data_size": 63488 00:12:48.276 }, 00:12:48.276 { 00:12:48.276 "name": "pt3", 00:12:48.276 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:48.276 "is_configured": true, 00:12:48.276 "data_offset": 2048, 00:12:48.276 "data_size": 63488 00:12:48.276 }, 00:12:48.276 { 00:12:48.276 "name": null, 00:12:48.276 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:48.276 "is_configured": false, 00:12:48.276 "data_offset": 2048, 00:12:48.276 "data_size": 63488 00:12:48.276 } 00:12:48.276 ] 00:12:48.276 }' 00:12:48.276 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.276 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.536 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:48.536 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.536 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.536 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:48.536 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.536 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:48.536 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:48.536 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.536 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.536 [2024-11-20 
10:35:51.927802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:48.536 [2024-11-20 10:35:51.927912] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:48.536 [2024-11-20 10:35:51.927939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:48.536 [2024-11-20 10:35:51.927949] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:48.536 [2024-11-20 10:35:51.928425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:48.536 [2024-11-20 10:35:51.928452] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:48.536 [2024-11-20 10:35:51.928542] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:48.536 [2024-11-20 10:35:51.928573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:48.536 [2024-11-20 10:35:51.928710] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:48.536 [2024-11-20 10:35:51.928724] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:48.536 [2024-11-20 10:35:51.928972] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:48.536 [2024-11-20 10:35:51.929128] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:48.536 [2024-11-20 10:35:51.929140] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:48.536 [2024-11-20 10:35:51.929284] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:48.536 pt4 00:12:48.536 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.536 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:48.536 10:35:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:48.536 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:48.536 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.536 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.536 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:48.536 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.536 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.536 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.536 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.536 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.536 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.536 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.536 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.537 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.537 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.537 "name": "raid_bdev1", 00:12:48.537 "uuid": "18ea2d04-fa08-4b62-9786-b6da7ef6846f", 00:12:48.537 "strip_size_kb": 0, 00:12:48.537 "state": "online", 00:12:48.537 "raid_level": "raid1", 00:12:48.537 "superblock": true, 00:12:48.537 "num_base_bdevs": 4, 00:12:48.537 "num_base_bdevs_discovered": 3, 00:12:48.537 "num_base_bdevs_operational": 3, 00:12:48.537 "base_bdevs_list": [ 00:12:48.537 { 
00:12:48.537 "name": null, 00:12:48.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.537 "is_configured": false, 00:12:48.537 "data_offset": 2048, 00:12:48.537 "data_size": 63488 00:12:48.537 }, 00:12:48.537 { 00:12:48.537 "name": "pt2", 00:12:48.537 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:48.537 "is_configured": true, 00:12:48.537 "data_offset": 2048, 00:12:48.537 "data_size": 63488 00:12:48.537 }, 00:12:48.537 { 00:12:48.537 "name": "pt3", 00:12:48.537 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:48.537 "is_configured": true, 00:12:48.537 "data_offset": 2048, 00:12:48.537 "data_size": 63488 00:12:48.537 }, 00:12:48.537 { 00:12:48.537 "name": "pt4", 00:12:48.537 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:48.537 "is_configured": true, 00:12:48.537 "data_offset": 2048, 00:12:48.537 "data_size": 63488 00:12:48.537 } 00:12:48.537 ] 00:12:48.537 }' 00:12:48.537 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.537 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.105 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:49.105 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:49.105 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.105 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.105 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.105 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:49.105 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:49.105 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:49.105 
10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.105 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.105 [2024-11-20 10:35:52.431283] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:49.105 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.105 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 18ea2d04-fa08-4b62-9786-b6da7ef6846f '!=' 18ea2d04-fa08-4b62-9786-b6da7ef6846f ']' 00:12:49.105 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74709 00:12:49.105 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74709 ']' 00:12:49.105 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74709 00:12:49.105 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:49.105 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:49.106 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74709 00:12:49.106 killing process with pid 74709 00:12:49.106 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:49.106 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:49.106 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74709' 00:12:49.106 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74709 00:12:49.106 [2024-11-20 10:35:52.514629] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:49.106 [2024-11-20 10:35:52.514746] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:49.106 10:35:52 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74709 00:12:49.106 [2024-11-20 10:35:52.514856] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:49.106 [2024-11-20 10:35:52.514875] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:49.690 [2024-11-20 10:35:52.925016] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:50.627 10:35:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:50.627 00:12:50.627 real 0m8.663s 00:12:50.627 user 0m13.610s 00:12:50.627 sys 0m1.588s 00:12:50.627 ************************************ 00:12:50.627 END TEST raid_superblock_test 00:12:50.627 ************************************ 00:12:50.627 10:35:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:50.628 10:35:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.887 10:35:54 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:12:50.887 10:35:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:50.887 10:35:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:50.887 10:35:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:50.887 ************************************ 00:12:50.887 START TEST raid_read_error_test 00:12:50.887 ************************************ 00:12:50.887 10:35:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:12:50.887 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:50.887 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:50.887 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:50.887 
10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:50.887 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:50.887 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:50.887 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:50.887 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:50.887 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:50.887 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:50.887 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:50.887 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:50.887 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:50.887 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:50.887 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:50.887 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:50.887 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:50.887 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:50.887 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:50.887 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:50.887 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:50.887 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:50.887 10:35:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:50.887 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:50.887 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:50.887 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:50.887 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:50.887 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.MI9mTaCiUF 00:12:50.887 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75197 00:12:50.887 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:50.887 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75197 00:12:50.887 10:35:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75197 ']' 00:12:50.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.887 10:35:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.887 10:35:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:50.887 10:35:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.887 10:35:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:50.887 10:35:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.887 [2024-11-20 10:35:54.255146] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:12:50.887 [2024-11-20 10:35:54.255282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75197 ] 00:12:51.147 [2024-11-20 10:35:54.416038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.147 [2024-11-20 10:35:54.540076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.406 [2024-11-20 10:35:54.750157] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:51.406 [2024-11-20 10:35:54.750296] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:51.665 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:51.666 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:51.666 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:51.666 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:51.666 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.666 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.925 BaseBdev1_malloc 00:12:51.925 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.925 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:51.925 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.925 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.925 true 00:12:51.925 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:51.925 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:51.926 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.926 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.926 [2024-11-20 10:35:55.187037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:51.926 [2024-11-20 10:35:55.187110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.926 [2024-11-20 10:35:55.187136] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:51.926 [2024-11-20 10:35:55.187149] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.926 [2024-11-20 10:35:55.189497] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.926 [2024-11-20 10:35:55.189542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:51.926 BaseBdev1 00:12:51.926 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.926 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:51.926 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:51.926 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.926 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.926 BaseBdev2_malloc 00:12:51.926 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.926 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:51.926 10:35:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.926 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.926 true 00:12:51.926 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.926 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:51.926 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.926 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.926 [2024-11-20 10:35:55.258998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:51.926 [2024-11-20 10:35:55.259083] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.926 [2024-11-20 10:35:55.259104] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:51.926 [2024-11-20 10:35:55.259116] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.926 [2024-11-20 10:35:55.261606] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.926 [2024-11-20 10:35:55.261704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:51.926 BaseBdev2 00:12:51.926 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.926 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:51.926 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:51.926 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.926 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.926 BaseBdev3_malloc 00:12:51.926 10:35:55 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.926 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:51.926 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.926 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.926 true 00:12:51.926 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.926 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:51.926 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.926 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.926 [2024-11-20 10:35:55.340264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:51.926 [2024-11-20 10:35:55.340328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.926 [2024-11-20 10:35:55.340390] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:51.926 [2024-11-20 10:35:55.340405] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.926 [2024-11-20 10:35:55.342859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.926 [2024-11-20 10:35:55.342901] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:51.926 BaseBdev3 00:12:51.926 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.926 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:51.926 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:51.926 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.926 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.926 BaseBdev4_malloc 00:12:51.926 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.926 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:51.926 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.926 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.186 true 00:12:52.186 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.186 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:52.186 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.186 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.186 [2024-11-20 10:35:55.407861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:52.186 [2024-11-20 10:35:55.407920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:52.186 [2024-11-20 10:35:55.407958] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:52.186 [2024-11-20 10:35:55.407970] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:52.186 [2024-11-20 10:35:55.410276] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:52.186 [2024-11-20 10:35:55.410363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:52.186 BaseBdev4 00:12:52.186 10:35:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.186 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:52.186 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.186 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.186 [2024-11-20 10:35:55.419920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:52.186 [2024-11-20 10:35:55.421854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:52.186 [2024-11-20 10:35:55.421926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:52.186 [2024-11-20 10:35:55.421990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:52.186 [2024-11-20 10:35:55.422213] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:52.186 [2024-11-20 10:35:55.422226] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:52.186 [2024-11-20 10:35:55.422481] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:52.186 [2024-11-20 10:35:55.422656] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:52.186 [2024-11-20 10:35:55.422666] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:52.186 [2024-11-20 10:35:55.422851] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:52.186 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.186 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:52.186 10:35:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:52.186 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.186 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.186 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.186 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:52.186 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.186 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.186 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.186 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.186 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.186 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.186 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.186 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.186 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.186 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.186 "name": "raid_bdev1", 00:12:52.186 "uuid": "073899cc-9b57-488a-8f10-a77df8caf2ef", 00:12:52.186 "strip_size_kb": 0, 00:12:52.186 "state": "online", 00:12:52.186 "raid_level": "raid1", 00:12:52.186 "superblock": true, 00:12:52.186 "num_base_bdevs": 4, 00:12:52.186 "num_base_bdevs_discovered": 4, 00:12:52.186 "num_base_bdevs_operational": 4, 00:12:52.186 "base_bdevs_list": [ 00:12:52.186 { 
00:12:52.186 "name": "BaseBdev1", 00:12:52.186 "uuid": "8d64d098-7043-5f28-aba2-53780f4ad88d", 00:12:52.186 "is_configured": true, 00:12:52.186 "data_offset": 2048, 00:12:52.186 "data_size": 63488 00:12:52.186 }, 00:12:52.186 { 00:12:52.186 "name": "BaseBdev2", 00:12:52.186 "uuid": "88eb5d0d-1714-5a39-86ac-772d34ea9122", 00:12:52.186 "is_configured": true, 00:12:52.186 "data_offset": 2048, 00:12:52.186 "data_size": 63488 00:12:52.186 }, 00:12:52.186 { 00:12:52.186 "name": "BaseBdev3", 00:12:52.186 "uuid": "bb57fc95-8485-5a18-85c7-886693388224", 00:12:52.186 "is_configured": true, 00:12:52.186 "data_offset": 2048, 00:12:52.186 "data_size": 63488 00:12:52.186 }, 00:12:52.186 { 00:12:52.186 "name": "BaseBdev4", 00:12:52.186 "uuid": "5acc3cf8-0484-5597-93f6-9e475733596d", 00:12:52.186 "is_configured": true, 00:12:52.186 "data_offset": 2048, 00:12:52.186 "data_size": 63488 00:12:52.186 } 00:12:52.186 ] 00:12:52.186 }' 00:12:52.186 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.186 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.446 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:52.446 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:52.707 [2024-11-20 10:35:55.960531] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:53.647 10:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:53.647 10:35:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.647 10:35:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.647 10:35:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.647 10:35:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:53.647 10:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:53.647 10:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:53.647 10:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:53.647 10:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:53.647 10:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:53.647 10:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:53.647 10:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:53.647 10:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:53.647 10:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:53.647 10:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.647 10:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.647 10:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.647 10:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.647 10:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.647 10:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.647 10:35:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.647 10:35:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.647 10:35:56 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.647 10:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.647 "name": "raid_bdev1", 00:12:53.647 "uuid": "073899cc-9b57-488a-8f10-a77df8caf2ef", 00:12:53.647 "strip_size_kb": 0, 00:12:53.647 "state": "online", 00:12:53.647 "raid_level": "raid1", 00:12:53.647 "superblock": true, 00:12:53.647 "num_base_bdevs": 4, 00:12:53.647 "num_base_bdevs_discovered": 4, 00:12:53.647 "num_base_bdevs_operational": 4, 00:12:53.647 "base_bdevs_list": [ 00:12:53.647 { 00:12:53.647 "name": "BaseBdev1", 00:12:53.647 "uuid": "8d64d098-7043-5f28-aba2-53780f4ad88d", 00:12:53.647 "is_configured": true, 00:12:53.647 "data_offset": 2048, 00:12:53.647 "data_size": 63488 00:12:53.647 }, 00:12:53.647 { 00:12:53.647 "name": "BaseBdev2", 00:12:53.647 "uuid": "88eb5d0d-1714-5a39-86ac-772d34ea9122", 00:12:53.647 "is_configured": true, 00:12:53.647 "data_offset": 2048, 00:12:53.647 "data_size": 63488 00:12:53.647 }, 00:12:53.647 { 00:12:53.647 "name": "BaseBdev3", 00:12:53.647 "uuid": "bb57fc95-8485-5a18-85c7-886693388224", 00:12:53.647 "is_configured": true, 00:12:53.647 "data_offset": 2048, 00:12:53.647 "data_size": 63488 00:12:53.647 }, 00:12:53.647 { 00:12:53.647 "name": "BaseBdev4", 00:12:53.647 "uuid": "5acc3cf8-0484-5597-93f6-9e475733596d", 00:12:53.647 "is_configured": true, 00:12:53.647 "data_offset": 2048, 00:12:53.647 "data_size": 63488 00:12:53.647 } 00:12:53.647 ] 00:12:53.647 }' 00:12:53.647 10:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.647 10:35:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.908 10:35:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:53.908 10:35:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.908 10:35:57 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:53.908 [2024-11-20 10:35:57.357839] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:53.908 [2024-11-20 10:35:57.357875] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:53.908 [2024-11-20 10:35:57.361074] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:53.908 [2024-11-20 10:35:57.361137] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:53.908 [2024-11-20 10:35:57.361290] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:53.908 [2024-11-20 10:35:57.361304] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:53.908 { 00:12:53.908 "results": [ 00:12:53.908 { 00:12:53.908 "job": "raid_bdev1", 00:12:53.908 "core_mask": "0x1", 00:12:53.908 "workload": "randrw", 00:12:53.908 "percentage": 50, 00:12:53.908 "status": "finished", 00:12:53.908 "queue_depth": 1, 00:12:53.908 "io_size": 131072, 00:12:53.908 "runtime": 1.398129, 00:12:53.908 "iops": 9749.458025690048, 00:12:53.908 "mibps": 1218.682253211256, 00:12:53.908 "io_failed": 0, 00:12:53.908 "io_timeout": 0, 00:12:53.908 "avg_latency_us": 99.5455519287368, 00:12:53.908 "min_latency_us": 24.593886462882097, 00:12:53.908 "max_latency_us": 1731.4096069868995 00:12:53.908 } 00:12:53.908 ], 00:12:53.908 "core_count": 1 00:12:53.908 } 00:12:53.908 10:35:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.908 10:35:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75197 00:12:53.908 10:35:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75197 ']' 00:12:53.908 10:35:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75197 00:12:53.908 10:35:57 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:12:53.908 10:35:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:53.908 10:35:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75197 00:12:54.168 killing process with pid 75197 00:12:54.168 10:35:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:54.168 10:35:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:54.168 10:35:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75197' 00:12:54.168 10:35:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75197 00:12:54.168 [2024-11-20 10:35:57.404625] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:54.168 10:35:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75197 00:12:54.427 [2024-11-20 10:35:57.749385] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:55.812 10:35:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.MI9mTaCiUF 00:12:55.812 10:35:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:55.812 10:35:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:55.812 10:35:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:55.812 10:35:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:55.812 10:35:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:55.812 10:35:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:55.812 10:35:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:55.812 00:12:55.812 real 0m4.816s 00:12:55.812 user 0m5.704s 00:12:55.812 sys 0m0.590s 
00:12:55.812 ************************************ 00:12:55.812 10:35:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:55.812 10:35:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.812 END TEST raid_read_error_test 00:12:55.812 ************************************ 00:12:55.812 10:35:59 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:12:55.812 10:35:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:55.812 10:35:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:55.812 10:35:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:55.812 ************************************ 00:12:55.812 START TEST raid_write_error_test 00:12:55.812 ************************************ 00:12:55.812 10:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:12:55.812 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:55.812 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:55.812 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:55.812 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:55.812 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:55.812 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:55.812 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:55.812 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:55.812 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:55.812 10:35:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:55.812 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:55.812 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:55.812 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:55.812 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:55.812 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:55.812 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:55.812 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:55.812 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:55.812 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:55.812 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:55.812 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:55.812 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:55.812 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:55.812 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:55.812 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:55.812 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:55.812 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:55.812 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.3VTiAAlA3L 00:12:55.812 10:35:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75343 00:12:55.813 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:55.813 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75343 00:12:55.813 10:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75343 ']' 00:12:55.813 10:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.813 10:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:55.813 10:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.813 10:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:55.813 10:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.813 [2024-11-20 10:35:59.136795] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:12:55.813 [2024-11-20 10:35:59.137337] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75343 ] 00:12:56.072 [2024-11-20 10:35:59.293731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.072 [2024-11-20 10:35:59.410361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.331 [2024-11-20 10:35:59.610681] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:56.331 [2024-11-20 10:35:59.610816] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:56.590 10:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:56.590 10:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:56.590 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:56.590 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:56.590 10:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.590 10:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.590 BaseBdev1_malloc 00:12:56.590 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.590 10:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:56.590 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.590 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.590 true 00:12:56.590 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:56.590 10:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:56.590 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.590 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.590 [2024-11-20 10:36:00.033906] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:56.590 [2024-11-20 10:36:00.033962] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.590 [2024-11-20 10:36:00.033980] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:56.590 [2024-11-20 10:36:00.033990] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.590 [2024-11-20 10:36:00.035974] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.590 [2024-11-20 10:36:00.036018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:56.590 BaseBdev1 00:12:56.590 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.590 10:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:56.590 10:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:56.590 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.590 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.850 BaseBdev2_malloc 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:56.850 10:36:00 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.850 true 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.850 [2024-11-20 10:36:00.102243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:56.850 [2024-11-20 10:36:00.102297] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.850 [2024-11-20 10:36:00.102314] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:56.850 [2024-11-20 10:36:00.102324] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.850 [2024-11-20 10:36:00.104399] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.850 [2024-11-20 10:36:00.104489] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:56.850 BaseBdev2 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:56.850 BaseBdev3_malloc 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.850 true 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.850 [2024-11-20 10:36:00.182008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:56.850 [2024-11-20 10:36:00.182058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.850 [2024-11-20 10:36:00.182074] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:56.850 [2024-11-20 10:36:00.182084] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.850 [2024-11-20 10:36:00.184159] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.850 [2024-11-20 10:36:00.184200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:56.850 BaseBdev3 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.850 BaseBdev4_malloc 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.850 true 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.850 [2024-11-20 10:36:00.250956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:56.850 [2024-11-20 10:36:00.251011] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.850 [2024-11-20 10:36:00.251029] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:56.850 [2024-11-20 10:36:00.251039] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.850 [2024-11-20 10:36:00.253279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.850 [2024-11-20 10:36:00.253323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:56.850 BaseBdev4 
00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.850 [2024-11-20 10:36:00.262988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:56.850 [2024-11-20 10:36:00.264893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:56.850 [2024-11-20 10:36:00.265056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:56.850 [2024-11-20 10:36:00.265138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:56.850 [2024-11-20 10:36:00.265414] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:56.850 [2024-11-20 10:36:00.265432] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:56.850 [2024-11-20 10:36:00.265695] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:56.850 [2024-11-20 10:36:00.265867] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:56.850 [2024-11-20 10:36:00.265876] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:56.850 [2024-11-20 10:36:00.266021] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.850 10:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:56.851 10:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.851 10:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.851 10:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.851 10:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.851 10:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.851 10:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.851 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.851 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.851 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.851 10:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.851 "name": "raid_bdev1", 00:12:56.851 "uuid": "4a2734a1-48f3-4ffc-95ba-1c575229fc32", 00:12:56.851 "strip_size_kb": 0, 00:12:56.851 "state": "online", 00:12:56.851 "raid_level": "raid1", 00:12:56.851 "superblock": true, 00:12:56.851 "num_base_bdevs": 4, 00:12:56.851 "num_base_bdevs_discovered": 4, 00:12:56.851 
"num_base_bdevs_operational": 4, 00:12:56.851 "base_bdevs_list": [ 00:12:56.851 { 00:12:56.851 "name": "BaseBdev1", 00:12:56.851 "uuid": "98922cb9-a454-50dd-8061-af1c6e017e93", 00:12:56.851 "is_configured": true, 00:12:56.851 "data_offset": 2048, 00:12:56.851 "data_size": 63488 00:12:56.851 }, 00:12:56.851 { 00:12:56.851 "name": "BaseBdev2", 00:12:56.851 "uuid": "a0b00fd7-7b02-52ed-bd20-8b48a7e647b9", 00:12:56.851 "is_configured": true, 00:12:56.851 "data_offset": 2048, 00:12:56.851 "data_size": 63488 00:12:56.851 }, 00:12:56.851 { 00:12:56.851 "name": "BaseBdev3", 00:12:56.851 "uuid": "b22a3b52-ad06-5605-baea-6a098e63c711", 00:12:56.851 "is_configured": true, 00:12:56.851 "data_offset": 2048, 00:12:56.851 "data_size": 63488 00:12:56.851 }, 00:12:56.851 { 00:12:56.851 "name": "BaseBdev4", 00:12:56.851 "uuid": "5950a578-e4e6-50fa-a85c-67080880604b", 00:12:56.851 "is_configured": true, 00:12:56.851 "data_offset": 2048, 00:12:56.851 "data_size": 63488 00:12:56.851 } 00:12:56.851 ] 00:12:56.851 }' 00:12:56.851 10:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.851 10:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.421 10:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:57.421 10:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:57.421 [2024-11-20 10:36:00.819421] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:58.359 10:36:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:58.359 10:36:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.359 10:36:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.359 [2024-11-20 10:36:01.730118] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:58.359 [2024-11-20 10:36:01.730273] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:58.359 [2024-11-20 10:36:01.730543] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:12:58.359 10:36:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.359 10:36:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:58.359 10:36:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:58.359 10:36:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:58.359 10:36:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:58.359 10:36:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:58.359 10:36:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.359 10:36:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.359 10:36:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:58.359 10:36:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:58.359 10:36:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:58.359 10:36:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.359 10:36:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.359 10:36:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.360 10:36:01 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.360 10:36:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.360 10:36:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.360 10:36:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.360 10:36:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.360 10:36:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.360 10:36:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.360 "name": "raid_bdev1", 00:12:58.360 "uuid": "4a2734a1-48f3-4ffc-95ba-1c575229fc32", 00:12:58.360 "strip_size_kb": 0, 00:12:58.360 "state": "online", 00:12:58.360 "raid_level": "raid1", 00:12:58.360 "superblock": true, 00:12:58.360 "num_base_bdevs": 4, 00:12:58.360 "num_base_bdevs_discovered": 3, 00:12:58.360 "num_base_bdevs_operational": 3, 00:12:58.360 "base_bdevs_list": [ 00:12:58.360 { 00:12:58.360 "name": null, 00:12:58.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.360 "is_configured": false, 00:12:58.360 "data_offset": 0, 00:12:58.360 "data_size": 63488 00:12:58.360 }, 00:12:58.360 { 00:12:58.360 "name": "BaseBdev2", 00:12:58.360 "uuid": "a0b00fd7-7b02-52ed-bd20-8b48a7e647b9", 00:12:58.360 "is_configured": true, 00:12:58.360 "data_offset": 2048, 00:12:58.360 "data_size": 63488 00:12:58.360 }, 00:12:58.360 { 00:12:58.360 "name": "BaseBdev3", 00:12:58.360 "uuid": "b22a3b52-ad06-5605-baea-6a098e63c711", 00:12:58.360 "is_configured": true, 00:12:58.360 "data_offset": 2048, 00:12:58.360 "data_size": 63488 00:12:58.360 }, 00:12:58.360 { 00:12:58.360 "name": "BaseBdev4", 00:12:58.360 "uuid": "5950a578-e4e6-50fa-a85c-67080880604b", 00:12:58.360 "is_configured": true, 00:12:58.360 "data_offset": 2048, 00:12:58.360 "data_size": 63488 00:12:58.360 } 00:12:58.360 ] 
00:12:58.360 }' 00:12:58.360 10:36:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.360 10:36:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.928 10:36:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:58.928 10:36:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.928 10:36:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.928 [2024-11-20 10:36:02.254286] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:58.928 [2024-11-20 10:36:02.254422] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:58.928 [2024-11-20 10:36:02.257527] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:58.928 [2024-11-20 10:36:02.257621] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:58.928 [2024-11-20 10:36:02.257737] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:58.928 [2024-11-20 10:36:02.257747] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:58.928 { 00:12:58.928 "results": [ 00:12:58.928 { 00:12:58.928 "job": "raid_bdev1", 00:12:58.928 "core_mask": "0x1", 00:12:58.928 "workload": "randrw", 00:12:58.928 "percentage": 50, 00:12:58.928 "status": "finished", 00:12:58.928 "queue_depth": 1, 00:12:58.928 "io_size": 131072, 00:12:58.928 "runtime": 1.435826, 00:12:58.928 "iops": 11313.348553376245, 00:12:58.928 "mibps": 1414.1685691720306, 00:12:58.928 "io_failed": 0, 00:12:58.928 "io_timeout": 0, 00:12:58.928 "avg_latency_us": 85.67891961990131, 00:12:58.928 "min_latency_us": 23.58777292576419, 00:12:58.928 "max_latency_us": 1345.0620087336245 00:12:58.928 } 00:12:58.928 ], 00:12:58.928 "core_count": 1 
00:12:58.928 } 00:12:58.928 10:36:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.928 10:36:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75343 00:12:58.928 10:36:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75343 ']' 00:12:58.928 10:36:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75343 00:12:58.928 10:36:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:58.928 10:36:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:58.928 10:36:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75343 00:12:58.928 10:36:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:58.928 10:36:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:58.928 killing process with pid 75343 00:12:58.929 10:36:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75343' 00:12:58.929 10:36:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75343 00:12:58.929 [2024-11-20 10:36:02.303098] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:58.929 10:36:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75343 00:12:59.188 [2024-11-20 10:36:02.636458] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:00.565 10:36:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.3VTiAAlA3L 00:13:00.565 10:36:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:00.565 10:36:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:00.565 10:36:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:13:00.565 10:36:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:00.565 10:36:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:00.565 10:36:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:00.565 ************************************ 00:13:00.565 END TEST raid_write_error_test 00:13:00.565 ************************************ 00:13:00.565 10:36:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:00.565 00:13:00.565 real 0m4.773s 00:13:00.565 user 0m5.698s 00:13:00.565 sys 0m0.590s 00:13:00.565 10:36:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:00.565 10:36:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.565 10:36:03 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:13:00.565 10:36:03 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:00.565 10:36:03 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:13:00.565 10:36:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:00.565 10:36:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:00.565 10:36:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:00.565 ************************************ 00:13:00.565 START TEST raid_rebuild_test 00:13:00.565 ************************************ 00:13:00.565 10:36:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:13:00.565 10:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:00.565 10:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:00.565 10:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:00.565 
10:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:00.565 10:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:00.565 10:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:00.565 10:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:00.565 10:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:00.565 10:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:00.565 10:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:00.565 10:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:00.565 10:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:00.565 10:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:00.565 10:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:00.565 10:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:00.565 10:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:00.565 10:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:00.565 10:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:00.565 10:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:00.565 10:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:00.565 10:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:00.565 10:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:00.565 10:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:13:00.565 10:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75492 00:13:00.565 10:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:00.565 10:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75492 00:13:00.565 10:36:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75492 ']' 00:13:00.565 10:36:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.565 10:36:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:00.565 10:36:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.565 10:36:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:00.565 10:36:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.565 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:00.565 Zero copy mechanism will not be used. 00:13:00.565 [2024-11-20 10:36:03.972719] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:13:00.565 [2024-11-20 10:36:03.972849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75492 ] 00:13:00.824 [2024-11-20 10:36:04.146815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.824 [2024-11-20 10:36:04.259504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.083 [2024-11-20 10:36:04.473442] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:01.083 [2024-11-20 10:36:04.473511] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:01.651 10:36:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:01.651 10:36:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:13:01.651 10:36:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:01.651 10:36:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:01.651 10:36:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.651 10:36:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.651 BaseBdev1_malloc 00:13:01.651 10:36:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.651 10:36:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:01.651 10:36:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.651 10:36:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.651 [2024-11-20 10:36:04.866867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:01.651 
[2024-11-20 10:36:04.866950] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.651 [2024-11-20 10:36:04.866977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:01.651 [2024-11-20 10:36:04.866988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.651 [2024-11-20 10:36:04.869172] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.651 [2024-11-20 10:36:04.869259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:01.651 BaseBdev1 00:13:01.651 10:36:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.651 10:36:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:01.651 10:36:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:01.651 10:36:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.651 10:36:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.651 BaseBdev2_malloc 00:13:01.651 10:36:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.651 10:36:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:01.651 10:36:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.651 10:36:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.651 [2024-11-20 10:36:04.919473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:01.651 [2024-11-20 10:36:04.919568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.651 [2024-11-20 10:36:04.919588] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:13:01.651 [2024-11-20 10:36:04.919599] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.651 [2024-11-20 10:36:04.921716] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.651 [2024-11-20 10:36:04.921817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:01.651 BaseBdev2 00:13:01.651 10:36:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.651 10:36:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:01.651 10:36:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.651 10:36:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.651 spare_malloc 00:13:01.651 10:36:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.651 10:36:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:01.651 10:36:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.651 10:36:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.651 spare_delay 00:13:01.651 10:36:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.651 10:36:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:01.651 10:36:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.651 10:36:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.651 [2024-11-20 10:36:04.998192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:01.651 [2024-11-20 10:36:04.998262] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:13:01.651 [2024-11-20 10:36:04.998286] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:01.652 [2024-11-20 10:36:04.998297] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.652 [2024-11-20 10:36:05.000518] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.652 [2024-11-20 10:36:05.000609] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:01.652 spare 00:13:01.652 10:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.652 10:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:01.652 10:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.652 10:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.652 [2024-11-20 10:36:05.010199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:01.652 [2024-11-20 10:36:05.012028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:01.652 [2024-11-20 10:36:05.012172] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:01.652 [2024-11-20 10:36:05.012189] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:01.652 [2024-11-20 10:36:05.012486] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:01.652 [2024-11-20 10:36:05.012668] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:01.652 [2024-11-20 10:36:05.012680] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:01.652 [2024-11-20 10:36:05.012840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:13:01.652 10:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.652 10:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:01.652 10:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.652 10:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.652 10:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.652 10:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.652 10:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:01.652 10:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.652 10:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.652 10:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.652 10:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.652 10:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.652 10:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.652 10:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.652 10:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.652 10:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.652 10:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.652 "name": "raid_bdev1", 00:13:01.652 "uuid": "98a5e3db-2cad-4aa5-b9ce-479a51bdfa58", 00:13:01.652 "strip_size_kb": 0, 00:13:01.652 "state": "online", 00:13:01.652 
"raid_level": "raid1", 00:13:01.652 "superblock": false, 00:13:01.652 "num_base_bdevs": 2, 00:13:01.652 "num_base_bdevs_discovered": 2, 00:13:01.652 "num_base_bdevs_operational": 2, 00:13:01.652 "base_bdevs_list": [ 00:13:01.652 { 00:13:01.652 "name": "BaseBdev1", 00:13:01.652 "uuid": "1fc745e5-56e9-5944-b200-7e3552259798", 00:13:01.652 "is_configured": true, 00:13:01.652 "data_offset": 0, 00:13:01.652 "data_size": 65536 00:13:01.652 }, 00:13:01.652 { 00:13:01.652 "name": "BaseBdev2", 00:13:01.652 "uuid": "96bc2061-4479-5f8b-b135-87e7ec85c770", 00:13:01.652 "is_configured": true, 00:13:01.652 "data_offset": 0, 00:13:01.652 "data_size": 65536 00:13:01.652 } 00:13:01.652 ] 00:13:01.652 }' 00:13:01.652 10:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.652 10:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.219 10:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:02.219 10:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.219 10:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.219 10:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:02.219 [2024-11-20 10:36:05.481723] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:02.219 10:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.219 10:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:02.219 10:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:02.220 10:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.220 10:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.220 10:36:05 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.220 10:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.220 10:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:02.220 10:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:02.220 10:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:02.220 10:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:02.220 10:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:02.220 10:36:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:02.220 10:36:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:02.220 10:36:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:02.220 10:36:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:02.220 10:36:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:02.220 10:36:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:02.220 10:36:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:02.220 10:36:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:02.220 10:36:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:02.479 [2024-11-20 10:36:05.713059] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:02.479 /dev/nbd0 00:13:02.479 10:36:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:02.479 10:36:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:13:02.479 10:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:02.479 10:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:02.479 10:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:02.479 10:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:02.479 10:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:02.479 10:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:02.479 10:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:02.479 10:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:02.479 10:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:02.479 1+0 records in 00:13:02.479 1+0 records out 00:13:02.479 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000578599 s, 7.1 MB/s 00:13:02.479 10:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.479 10:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:02.479 10:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.479 10:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:02.479 10:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:02.479 10:36:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:02.479 10:36:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:02.479 10:36:05 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:02.479 10:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:02.479 10:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:06.673 65536+0 records in 00:13:06.673 65536+0 records out 00:13:06.673 33554432 bytes (34 MB, 32 MiB) copied, 4.11674 s, 8.2 MB/s 00:13:06.673 10:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:06.673 10:36:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:06.673 10:36:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:06.673 10:36:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:06.673 10:36:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:06.673 10:36:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.673 10:36:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:06.673 [2024-11-20 10:36:10.106226] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.673 10:36:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:06.673 10:36:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:06.673 10:36:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:06.673 10:36:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.673 10:36:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.673 10:36:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:06.673 10:36:10 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:13:06.673 10:36:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.673 10:36:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:06.673 10:36:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.673 10:36:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.673 [2024-11-20 10:36:10.142284] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:06.673 10:36:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.673 10:36:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:06.673 10:36:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.673 10:36:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.673 10:36:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.673 10:36:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.673 10:36:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:06.673 10:36:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.673 10:36:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.932 10:36:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.932 10:36:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.932 10:36:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.932 10:36:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.932 10:36:10 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:06.932 10:36:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.932 10:36:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.932 10:36:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.932 "name": "raid_bdev1", 00:13:06.932 "uuid": "98a5e3db-2cad-4aa5-b9ce-479a51bdfa58", 00:13:06.932 "strip_size_kb": 0, 00:13:06.932 "state": "online", 00:13:06.932 "raid_level": "raid1", 00:13:06.932 "superblock": false, 00:13:06.932 "num_base_bdevs": 2, 00:13:06.932 "num_base_bdevs_discovered": 1, 00:13:06.932 "num_base_bdevs_operational": 1, 00:13:06.932 "base_bdevs_list": [ 00:13:06.932 { 00:13:06.932 "name": null, 00:13:06.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.932 "is_configured": false, 00:13:06.932 "data_offset": 0, 00:13:06.932 "data_size": 65536 00:13:06.932 }, 00:13:06.932 { 00:13:06.932 "name": "BaseBdev2", 00:13:06.932 "uuid": "96bc2061-4479-5f8b-b135-87e7ec85c770", 00:13:06.932 "is_configured": true, 00:13:06.932 "data_offset": 0, 00:13:06.932 "data_size": 65536 00:13:06.932 } 00:13:06.932 ] 00:13:06.932 }' 00:13:06.932 10:36:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.932 10:36:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.190 10:36:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:07.190 10:36:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.190 10:36:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.190 [2024-11-20 10:36:10.597514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:07.190 [2024-11-20 10:36:10.614259] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
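The `verify_raid_bdev_state` / `verify_raid_bdev_process` checks traced above boil down to filtering the `bdev_raid_get_bdevs` JSON with jq. A minimal sketch of that filtering, using a sample document trimmed from the output above (in the real test the JSON comes from `rpc.py -s /var/tmp/spdk.sock bdev_raid_get_bdevs all`; the `bdevs`/`info` variable names here are illustrative, not from the harness):

```shell
#!/usr/bin/env bash
# Sample bdev_raid_get_bdevs-style output, trimmed from the log above.
bdevs='[{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 1
}]'

# Select the bdev under test, as bdev_raid.sh@113 does.
info=$(echo "$bdevs" | jq -r '.[] | select(.name == "raid_bdev1")')

# State check mirrors verify_raid_bdev_state; the process check mirrors
# verify_raid_bdev_process: `// "none"` maps a missing .process (no
# rebuild running) to the string "none".
echo "$info" | jq -r '.state'                  # online
echo "$info" | jq -r '.process.type // "none"' # none
```

The `// "none"` fallback is what lets the same check distinguish "rebuild in progress" from "no background process" without a separate existence test.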
00:13:07.190 10:36:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.190 10:36:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:07.190 [2024-11-20 10:36:10.616302] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:08.569 10:36:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:08.569 10:36:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.569 10:36:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:08.569 10:36:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:08.569 10:36:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.569 10:36:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.569 10:36:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.569 10:36:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.569 10:36:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.569 10:36:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.569 10:36:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.569 "name": "raid_bdev1", 00:13:08.569 "uuid": "98a5e3db-2cad-4aa5-b9ce-479a51bdfa58", 00:13:08.569 "strip_size_kb": 0, 00:13:08.569 "state": "online", 00:13:08.569 "raid_level": "raid1", 00:13:08.569 "superblock": false, 00:13:08.569 "num_base_bdevs": 2, 00:13:08.569 "num_base_bdevs_discovered": 2, 00:13:08.569 "num_base_bdevs_operational": 2, 00:13:08.569 "process": { 00:13:08.569 "type": "rebuild", 00:13:08.569 "target": "spare", 00:13:08.569 "progress": { 00:13:08.569 
"blocks": 20480, 00:13:08.569 "percent": 31 00:13:08.569 } 00:13:08.569 }, 00:13:08.569 "base_bdevs_list": [ 00:13:08.569 { 00:13:08.569 "name": "spare", 00:13:08.569 "uuid": "b4e23476-4d5a-5456-9d1a-baf88cc2ab7d", 00:13:08.569 "is_configured": true, 00:13:08.569 "data_offset": 0, 00:13:08.569 "data_size": 65536 00:13:08.569 }, 00:13:08.569 { 00:13:08.569 "name": "BaseBdev2", 00:13:08.569 "uuid": "96bc2061-4479-5f8b-b135-87e7ec85c770", 00:13:08.569 "is_configured": true, 00:13:08.569 "data_offset": 0, 00:13:08.569 "data_size": 65536 00:13:08.569 } 00:13:08.569 ] 00:13:08.569 }' 00:13:08.569 10:36:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.569 10:36:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:08.569 10:36:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.569 10:36:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:08.569 10:36:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:08.569 10:36:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.569 10:36:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.569 [2024-11-20 10:36:11.751781] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:08.569 [2024-11-20 10:36:11.821996] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:08.569 [2024-11-20 10:36:11.822070] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.569 [2024-11-20 10:36:11.822086] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:08.569 [2024-11-20 10:36:11.822096] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:08.569 10:36:11 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.569 10:36:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:08.569 10:36:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:08.569 10:36:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.569 10:36:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:08.569 10:36:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:08.569 10:36:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:08.569 10:36:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.569 10:36:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.569 10:36:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.569 10:36:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.569 10:36:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.569 10:36:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.569 10:36:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.569 10:36:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.569 10:36:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.569 10:36:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.569 "name": "raid_bdev1", 00:13:08.569 "uuid": "98a5e3db-2cad-4aa5-b9ce-479a51bdfa58", 00:13:08.569 "strip_size_kb": 0, 00:13:08.569 "state": "online", 00:13:08.569 "raid_level": "raid1", 00:13:08.570 
"superblock": false, 00:13:08.570 "num_base_bdevs": 2, 00:13:08.570 "num_base_bdevs_discovered": 1, 00:13:08.570 "num_base_bdevs_operational": 1, 00:13:08.570 "base_bdevs_list": [ 00:13:08.570 { 00:13:08.570 "name": null, 00:13:08.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.570 "is_configured": false, 00:13:08.570 "data_offset": 0, 00:13:08.570 "data_size": 65536 00:13:08.570 }, 00:13:08.570 { 00:13:08.570 "name": "BaseBdev2", 00:13:08.570 "uuid": "96bc2061-4479-5f8b-b135-87e7ec85c770", 00:13:08.570 "is_configured": true, 00:13:08.570 "data_offset": 0, 00:13:08.570 "data_size": 65536 00:13:08.570 } 00:13:08.570 ] 00:13:08.570 }' 00:13:08.570 10:36:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.570 10:36:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.139 10:36:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:09.139 10:36:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:09.139 10:36:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:09.139 10:36:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:09.139 10:36:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:09.139 10:36:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.139 10:36:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.140 10:36:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.140 10:36:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.140 10:36:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.140 10:36:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:13:09.140 "name": "raid_bdev1", 00:13:09.140 "uuid": "98a5e3db-2cad-4aa5-b9ce-479a51bdfa58", 00:13:09.140 "strip_size_kb": 0, 00:13:09.140 "state": "online", 00:13:09.140 "raid_level": "raid1", 00:13:09.140 "superblock": false, 00:13:09.140 "num_base_bdevs": 2, 00:13:09.140 "num_base_bdevs_discovered": 1, 00:13:09.140 "num_base_bdevs_operational": 1, 00:13:09.140 "base_bdevs_list": [ 00:13:09.140 { 00:13:09.140 "name": null, 00:13:09.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.140 "is_configured": false, 00:13:09.140 "data_offset": 0, 00:13:09.140 "data_size": 65536 00:13:09.140 }, 00:13:09.140 { 00:13:09.140 "name": "BaseBdev2", 00:13:09.140 "uuid": "96bc2061-4479-5f8b-b135-87e7ec85c770", 00:13:09.140 "is_configured": true, 00:13:09.140 "data_offset": 0, 00:13:09.140 "data_size": 65536 00:13:09.140 } 00:13:09.140 ] 00:13:09.140 }' 00:13:09.140 10:36:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:09.140 10:36:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:09.140 10:36:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:09.140 10:36:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:09.140 10:36:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:09.140 10:36:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.140 10:36:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.140 [2024-11-20 10:36:12.452637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:09.140 [2024-11-20 10:36:12.468469] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:13:09.140 10:36:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.140 
10:36:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:09.140 [2024-11-20 10:36:12.470203] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:10.080 10:36:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:10.080 10:36:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.080 10:36:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:10.080 10:36:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:10.080 10:36:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.080 10:36:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.080 10:36:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.080 10:36:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.080 10:36:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.080 10:36:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.080 10:36:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:10.080 "name": "raid_bdev1", 00:13:10.080 "uuid": "98a5e3db-2cad-4aa5-b9ce-479a51bdfa58", 00:13:10.080 "strip_size_kb": 0, 00:13:10.080 "state": "online", 00:13:10.080 "raid_level": "raid1", 00:13:10.080 "superblock": false, 00:13:10.080 "num_base_bdevs": 2, 00:13:10.080 "num_base_bdevs_discovered": 2, 00:13:10.080 "num_base_bdevs_operational": 2, 00:13:10.080 "process": { 00:13:10.080 "type": "rebuild", 00:13:10.080 "target": "spare", 00:13:10.080 "progress": { 00:13:10.080 "blocks": 20480, 00:13:10.080 "percent": 31 00:13:10.080 } 00:13:10.080 }, 00:13:10.080 "base_bdevs_list": [ 
00:13:10.080 { 00:13:10.080 "name": "spare", 00:13:10.080 "uuid": "b4e23476-4d5a-5456-9d1a-baf88cc2ab7d", 00:13:10.080 "is_configured": true, 00:13:10.080 "data_offset": 0, 00:13:10.080 "data_size": 65536 00:13:10.080 }, 00:13:10.080 { 00:13:10.080 "name": "BaseBdev2", 00:13:10.080 "uuid": "96bc2061-4479-5f8b-b135-87e7ec85c770", 00:13:10.080 "is_configured": true, 00:13:10.080 "data_offset": 0, 00:13:10.080 "data_size": 65536 00:13:10.080 } 00:13:10.080 ] 00:13:10.080 }' 00:13:10.080 10:36:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.374 10:36:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:10.374 10:36:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.374 10:36:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:10.374 10:36:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:10.374 10:36:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:10.374 10:36:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:10.374 10:36:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:10.374 10:36:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=378 00:13:10.374 10:36:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:10.374 10:36:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:10.374 10:36:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.374 10:36:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:10.374 10:36:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:10.374 
10:36:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.374 10:36:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.374 10:36:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.374 10:36:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.374 10:36:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.374 10:36:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.374 10:36:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:10.374 "name": "raid_bdev1", 00:13:10.374 "uuid": "98a5e3db-2cad-4aa5-b9ce-479a51bdfa58", 00:13:10.374 "strip_size_kb": 0, 00:13:10.374 "state": "online", 00:13:10.374 "raid_level": "raid1", 00:13:10.374 "superblock": false, 00:13:10.374 "num_base_bdevs": 2, 00:13:10.374 "num_base_bdevs_discovered": 2, 00:13:10.374 "num_base_bdevs_operational": 2, 00:13:10.374 "process": { 00:13:10.374 "type": "rebuild", 00:13:10.374 "target": "spare", 00:13:10.374 "progress": { 00:13:10.374 "blocks": 22528, 00:13:10.374 "percent": 34 00:13:10.374 } 00:13:10.374 }, 00:13:10.374 "base_bdevs_list": [ 00:13:10.374 { 00:13:10.374 "name": "spare", 00:13:10.374 "uuid": "b4e23476-4d5a-5456-9d1a-baf88cc2ab7d", 00:13:10.374 "is_configured": true, 00:13:10.374 "data_offset": 0, 00:13:10.374 "data_size": 65536 00:13:10.374 }, 00:13:10.374 { 00:13:10.374 "name": "BaseBdev2", 00:13:10.374 "uuid": "96bc2061-4479-5f8b-b135-87e7ec85c770", 00:13:10.374 "is_configured": true, 00:13:10.374 "data_offset": 0, 00:13:10.374 "data_size": 65536 00:13:10.374 } 00:13:10.374 ] 00:13:10.374 }' 00:13:10.374 10:36:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.374 10:36:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:13:10.374 10:36:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.374 10:36:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:10.374 10:36:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:11.311 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:11.311 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:11.311 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:11.311 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:11.311 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:11.311 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:11.570 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.570 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.570 10:36:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.570 10:36:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.570 10:36:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.570 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.570 "name": "raid_bdev1", 00:13:11.570 "uuid": "98a5e3db-2cad-4aa5-b9ce-479a51bdfa58", 00:13:11.570 "strip_size_kb": 0, 00:13:11.570 "state": "online", 00:13:11.570 "raid_level": "raid1", 00:13:11.570 "superblock": false, 00:13:11.570 "num_base_bdevs": 2, 00:13:11.570 "num_base_bdevs_discovered": 2, 00:13:11.570 "num_base_bdevs_operational": 2, 00:13:11.570 "process": { 
00:13:11.570 "type": "rebuild", 00:13:11.570 "target": "spare", 00:13:11.570 "progress": { 00:13:11.570 "blocks": 47104, 00:13:11.570 "percent": 71 00:13:11.570 } 00:13:11.570 }, 00:13:11.570 "base_bdevs_list": [ 00:13:11.570 { 00:13:11.570 "name": "spare", 00:13:11.570 "uuid": "b4e23476-4d5a-5456-9d1a-baf88cc2ab7d", 00:13:11.570 "is_configured": true, 00:13:11.570 "data_offset": 0, 00:13:11.570 "data_size": 65536 00:13:11.570 }, 00:13:11.570 { 00:13:11.570 "name": "BaseBdev2", 00:13:11.570 "uuid": "96bc2061-4479-5f8b-b135-87e7ec85c770", 00:13:11.570 "is_configured": true, 00:13:11.570 "data_offset": 0, 00:13:11.570 "data_size": 65536 00:13:11.570 } 00:13:11.570 ] 00:13:11.570 }' 00:13:11.570 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.570 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:11.570 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.570 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:11.571 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:12.506 [2024-11-20 10:36:15.685067] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:12.506 [2024-11-20 10:36:15.685150] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:12.506 [2024-11-20 10:36:15.685201] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:12.506 10:36:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:12.506 10:36:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:12.506 10:36:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.506 10:36:15 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:12.506 10:36:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:12.506 10:36:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.506 10:36:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.506 10:36:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.506 10:36:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.506 10:36:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.506 10:36:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.766 10:36:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.766 "name": "raid_bdev1", 00:13:12.766 "uuid": "98a5e3db-2cad-4aa5-b9ce-479a51bdfa58", 00:13:12.766 "strip_size_kb": 0, 00:13:12.766 "state": "online", 00:13:12.766 "raid_level": "raid1", 00:13:12.766 "superblock": false, 00:13:12.766 "num_base_bdevs": 2, 00:13:12.766 "num_base_bdevs_discovered": 2, 00:13:12.766 "num_base_bdevs_operational": 2, 00:13:12.766 "base_bdevs_list": [ 00:13:12.766 { 00:13:12.766 "name": "spare", 00:13:12.766 "uuid": "b4e23476-4d5a-5456-9d1a-baf88cc2ab7d", 00:13:12.766 "is_configured": true, 00:13:12.766 "data_offset": 0, 00:13:12.766 "data_size": 65536 00:13:12.766 }, 00:13:12.766 { 00:13:12.766 "name": "BaseBdev2", 00:13:12.766 "uuid": "96bc2061-4479-5f8b-b135-87e7ec85c770", 00:13:12.766 "is_configured": true, 00:13:12.766 "data_offset": 0, 00:13:12.766 "data_size": 65536 00:13:12.766 } 00:13:12.766 ] 00:13:12.766 }' 00:13:12.766 10:36:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.766 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:12.766 10:36:16 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.766 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:12.766 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:12.766 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:12.766 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.766 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:12.766 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:12.766 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.766 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.766 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.766 10:36:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.766 10:36:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.766 10:36:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.766 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.766 "name": "raid_bdev1", 00:13:12.766 "uuid": "98a5e3db-2cad-4aa5-b9ce-479a51bdfa58", 00:13:12.766 "strip_size_kb": 0, 00:13:12.766 "state": "online", 00:13:12.766 "raid_level": "raid1", 00:13:12.766 "superblock": false, 00:13:12.766 "num_base_bdevs": 2, 00:13:12.766 "num_base_bdevs_discovered": 2, 00:13:12.766 "num_base_bdevs_operational": 2, 00:13:12.766 "base_bdevs_list": [ 00:13:12.766 { 00:13:12.766 "name": "spare", 00:13:12.766 "uuid": "b4e23476-4d5a-5456-9d1a-baf88cc2ab7d", 00:13:12.766 "is_configured": true, 
00:13:12.766 "data_offset": 0, 00:13:12.766 "data_size": 65536 00:13:12.766 }, 00:13:12.766 { 00:13:12.766 "name": "BaseBdev2", 00:13:12.766 "uuid": "96bc2061-4479-5f8b-b135-87e7ec85c770", 00:13:12.766 "is_configured": true, 00:13:12.766 "data_offset": 0, 00:13:12.766 "data_size": 65536 00:13:12.766 } 00:13:12.766 ] 00:13:12.766 }' 00:13:12.766 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.766 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:12.766 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.766 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:12.766 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:13.026 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.026 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:13.026 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:13.026 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:13.026 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:13.026 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.026 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.026 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.026 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.026 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.026 10:36:16 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.026 10:36:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.026 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.026 10:36:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.026 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.026 "name": "raid_bdev1", 00:13:13.026 "uuid": "98a5e3db-2cad-4aa5-b9ce-479a51bdfa58", 00:13:13.026 "strip_size_kb": 0, 00:13:13.026 "state": "online", 00:13:13.026 "raid_level": "raid1", 00:13:13.026 "superblock": false, 00:13:13.026 "num_base_bdevs": 2, 00:13:13.026 "num_base_bdevs_discovered": 2, 00:13:13.026 "num_base_bdevs_operational": 2, 00:13:13.026 "base_bdevs_list": [ 00:13:13.026 { 00:13:13.026 "name": "spare", 00:13:13.026 "uuid": "b4e23476-4d5a-5456-9d1a-baf88cc2ab7d", 00:13:13.026 "is_configured": true, 00:13:13.026 "data_offset": 0, 00:13:13.026 "data_size": 65536 00:13:13.026 }, 00:13:13.026 { 00:13:13.026 "name": "BaseBdev2", 00:13:13.026 "uuid": "96bc2061-4479-5f8b-b135-87e7ec85c770", 00:13:13.026 "is_configured": true, 00:13:13.026 "data_offset": 0, 00:13:13.026 "data_size": 65536 00:13:13.026 } 00:13:13.026 ] 00:13:13.026 }' 00:13:13.026 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.026 10:36:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.285 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:13.285 10:36:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.285 10:36:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.286 [2024-11-20 10:36:16.647476] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:13.286 [2024-11-20 10:36:16.647566] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:13.286 [2024-11-20 10:36:16.647685] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:13.286 [2024-11-20 10:36:16.647788] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:13.286 [2024-11-20 10:36:16.647857] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:13.286 10:36:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.286 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.286 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:13.286 10:36:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.286 10:36:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.286 10:36:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.286 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:13.286 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:13.286 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:13.286 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:13.286 10:36:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:13.286 10:36:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:13.286 10:36:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:13.286 10:36:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:13:13.286 10:36:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:13.286 10:36:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:13.286 10:36:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:13.286 10:36:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:13.286 10:36:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:13.545 /dev/nbd0 00:13:13.545 10:36:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:13.545 10:36:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:13.545 10:36:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:13.545 10:36:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:13.545 10:36:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:13.545 10:36:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:13.545 10:36:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:13.545 10:36:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:13.545 10:36:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:13.545 10:36:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:13.545 10:36:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:13.545 1+0 records in 00:13:13.545 1+0 records out 00:13:13.545 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034078 s, 12.0 MB/s 00:13:13.545 10:36:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.545 10:36:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:13.545 10:36:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.545 10:36:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:13.545 10:36:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:13.545 10:36:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:13.545 10:36:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:13.545 10:36:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:13.805 /dev/nbd1 00:13:13.805 10:36:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:13.805 10:36:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:13.805 10:36:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:13.805 10:36:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:13.805 10:36:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:13.805 10:36:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:13.805 10:36:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:13.805 10:36:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:13.805 10:36:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:13.805 10:36:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:13.805 10:36:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:13.805 1+0 records in 00:13:13.805 1+0 records out 00:13:13.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530401 s, 7.7 MB/s 00:13:13.805 10:36:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.805 10:36:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:13.805 10:36:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.805 10:36:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:13.805 10:36:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:13.805 10:36:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:13.805 10:36:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:13.805 10:36:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:14.064 10:36:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:14.064 10:36:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:14.064 10:36:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:14.064 10:36:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:14.064 10:36:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:14.064 10:36:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:14.064 10:36:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:14.324 10:36:17 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:14.324 10:36:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:14.324 10:36:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:14.324 10:36:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:14.324 10:36:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:14.324 10:36:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:14.324 10:36:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:14.324 10:36:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:14.324 10:36:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:14.324 10:36:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:14.584 10:36:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:14.584 10:36:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:14.584 10:36:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:14.584 10:36:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:14.584 10:36:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:14.584 10:36:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:14.584 10:36:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:14.584 10:36:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:14.584 10:36:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:14.584 10:36:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75492 00:13:14.584 10:36:17 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75492 ']' 00:13:14.584 10:36:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75492 00:13:14.584 10:36:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:13:14.584 10:36:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:14.584 10:36:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75492 00:13:14.584 10:36:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:14.584 10:36:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:14.584 killing process with pid 75492 00:13:14.584 Received shutdown signal, test time was about 60.000000 seconds 00:13:14.584 00:13:14.584 Latency(us) 00:13:14.584 [2024-11-20T10:36:18.063Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:14.584 [2024-11-20T10:36:18.063Z] =================================================================================================================== 00:13:14.584 [2024-11-20T10:36:18.063Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:14.584 10:36:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75492' 00:13:14.584 10:36:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75492 00:13:14.584 [2024-11-20 10:36:17.917361] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:14.584 10:36:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75492 00:13:14.843 [2024-11-20 10:36:18.230763] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:16.224 ************************************ 00:13:16.224 END TEST raid_rebuild_test 00:13:16.224 ************************************ 00:13:16.224 10:36:19 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@786 -- # return 0 00:13:16.224 00:13:16.224 real 0m15.432s 00:13:16.224 user 0m17.456s 00:13:16.224 sys 0m2.977s 00:13:16.224 10:36:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:16.224 10:36:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.224 10:36:19 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:13:16.224 10:36:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:16.224 10:36:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:16.224 10:36:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:16.224 ************************************ 00:13:16.224 START TEST raid_rebuild_test_sb 00:13:16.224 ************************************ 00:13:16.224 10:36:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:13:16.224 10:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:16.224 10:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:16.224 10:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:16.224 10:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:16.224 10:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:16.224 10:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:16.224 10:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:16.224 10:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:16.224 10:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:16.224 10:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( 
i <= num_base_bdevs )) 00:13:16.224 10:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:16.224 10:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:16.224 10:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:16.224 10:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:16.224 10:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:16.224 10:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:16.224 10:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:16.224 10:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:16.224 10:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:16.224 10:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:16.224 10:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:16.224 10:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:16.224 10:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:16.224 10:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:16.224 10:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75910 00:13:16.224 10:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:16.224 10:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75910 00:13:16.224 10:36:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75910 ']' 00:13:16.224 
10:36:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.224 10:36:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:16.224 10:36:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.224 10:36:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:16.224 10:36:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.224 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:16.224 Zero copy mechanism will not be used. 00:13:16.224 [2024-11-20 10:36:19.467908] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:13:16.224 [2024-11-20 10:36:19.468024] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75910 ] 00:13:16.224 [2024-11-20 10:36:19.639203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.485 [2024-11-20 10:36:19.751533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.485 [2024-11-20 10:36:19.957187] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:16.485 [2024-11-20 10:36:19.957304] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:17.054 10:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:17.054 10:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:17.054 10:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # 
for bdev in "${base_bdevs[@]}" 00:13:17.054 10:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:17.054 10:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.054 10:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.054 BaseBdev1_malloc 00:13:17.054 10:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.054 10:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:17.054 10:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.054 10:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.054 [2024-11-20 10:36:20.331246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:17.054 [2024-11-20 10:36:20.331384] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.054 [2024-11-20 10:36:20.331432] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:17.054 [2024-11-20 10:36:20.331495] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.054 [2024-11-20 10:36:20.333725] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.054 [2024-11-20 10:36:20.333817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:17.054 BaseBdev1 00:13:17.054 10:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.054 10:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:17.054 10:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:17.054 10:36:20 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.054 10:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.054 BaseBdev2_malloc 00:13:17.054 10:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.054 10:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:17.054 10:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.054 10:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.054 [2024-11-20 10:36:20.387002] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:17.054 [2024-11-20 10:36:20.387116] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.054 [2024-11-20 10:36:20.387153] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:17.054 [2024-11-20 10:36:20.387186] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.054 [2024-11-20 10:36:20.389511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.054 [2024-11-20 10:36:20.389589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:17.054 BaseBdev2 00:13:17.054 10:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.055 10:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:17.055 10:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.055 10:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.055 spare_malloc 00:13:17.055 10:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:17.055 10:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:17.055 10:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.055 10:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.055 spare_delay 00:13:17.055 10:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.055 10:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:17.055 10:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.055 10:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.055 [2024-11-20 10:36:20.468833] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:17.055 [2024-11-20 10:36:20.468906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.055 [2024-11-20 10:36:20.468927] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:17.055 [2024-11-20 10:36:20.468937] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.055 [2024-11-20 10:36:20.471160] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.055 [2024-11-20 10:36:20.471204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:17.055 spare 00:13:17.055 10:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.055 10:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:17.055 10:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.055 10:36:20 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.055 [2024-11-20 10:36:20.480900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:17.055 [2024-11-20 10:36:20.482849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:17.055 [2024-11-20 10:36:20.483028] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:17.055 [2024-11-20 10:36:20.483047] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:17.055 [2024-11-20 10:36:20.483315] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:17.055 [2024-11-20 10:36:20.483527] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:17.055 [2024-11-20 10:36:20.483544] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:17.055 [2024-11-20 10:36:20.483729] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.055 10:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.055 10:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:17.055 10:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:17.055 10:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.055 10:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.055 10:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.055 10:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:17.055 10:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:13:17.055 10:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.055 10:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.055 10:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.055 10:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.055 10:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.055 10:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.055 10:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.055 10:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.314 10:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.314 "name": "raid_bdev1", 00:13:17.314 "uuid": "cb365719-babc-4d1a-b7ed-c20f4b8099ca", 00:13:17.314 "strip_size_kb": 0, 00:13:17.314 "state": "online", 00:13:17.314 "raid_level": "raid1", 00:13:17.314 "superblock": true, 00:13:17.314 "num_base_bdevs": 2, 00:13:17.314 "num_base_bdevs_discovered": 2, 00:13:17.314 "num_base_bdevs_operational": 2, 00:13:17.314 "base_bdevs_list": [ 00:13:17.314 { 00:13:17.314 "name": "BaseBdev1", 00:13:17.314 "uuid": "a7652e8c-1a77-5d8e-ab68-064ceaf02ab9", 00:13:17.314 "is_configured": true, 00:13:17.314 "data_offset": 2048, 00:13:17.314 "data_size": 63488 00:13:17.314 }, 00:13:17.314 { 00:13:17.314 "name": "BaseBdev2", 00:13:17.314 "uuid": "f064f389-d2d5-5d02-8df5-fe19527c755b", 00:13:17.314 "is_configured": true, 00:13:17.314 "data_offset": 2048, 00:13:17.314 "data_size": 63488 00:13:17.314 } 00:13:17.314 ] 00:13:17.314 }' 00:13:17.314 10:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.314 10:36:20 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:17.575 10:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:17.575 10:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.575 10:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.575 10:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:17.575 [2024-11-20 10:36:20.924488] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:17.575 10:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.575 10:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:17.575 10:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.575 10:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:17.575 10:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.575 10:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.575 10:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.575 10:36:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:17.575 10:36:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:17.575 10:36:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:17.575 10:36:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:17.575 10:36:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:17.575 10:36:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:13:17.575 10:36:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:17.575 10:36:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:17.575 10:36:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:17.575 10:36:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:17.575 10:36:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:17.575 10:36:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:17.575 10:36:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:17.575 10:36:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:17.835 [2024-11-20 10:36:21.211727] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:17.835 /dev/nbd0 00:13:17.835 10:36:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:17.835 10:36:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:17.835 10:36:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:17.835 10:36:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:17.835 10:36:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:17.835 10:36:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:17.835 10:36:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:17.835 10:36:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:17.835 10:36:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:13:17.835 10:36:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:17.835 10:36:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:17.835 1+0 records in 00:13:17.835 1+0 records out 00:13:17.835 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289377 s, 14.2 MB/s 00:13:17.835 10:36:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.835 10:36:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:17.835 10:36:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.835 10:36:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:17.835 10:36:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:17.835 10:36:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:17.835 10:36:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:17.835 10:36:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:17.835 10:36:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:17.835 10:36:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:22.033 63488+0 records in 00:13:22.033 63488+0 records out 00:13:22.033 32505856 bytes (33 MB, 31 MiB) copied, 3.78595 s, 8.6 MB/s 00:13:22.033 10:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:22.033 10:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:22.033 10:36:25 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:22.033 10:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:22.033 10:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:22.033 10:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:22.033 10:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:22.033 10:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:22.033 [2024-11-20 10:36:25.286754] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.033 10:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:22.033 10:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:22.033 10:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:22.033 10:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:22.033 10:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:22.033 10:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:22.033 10:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:22.033 10:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:22.033 10:36:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.033 10:36:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.033 [2024-11-20 10:36:25.302817] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:22.033 10:36:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:22.033 10:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:22.033 10:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:22.033 10:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.033 10:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.033 10:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.033 10:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:22.033 10:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.033 10:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.033 10:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.033 10:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.033 10:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.033 10:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.033 10:36:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.033 10:36:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.033 10:36:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.033 10:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.033 "name": "raid_bdev1", 00:13:22.033 "uuid": "cb365719-babc-4d1a-b7ed-c20f4b8099ca", 00:13:22.033 "strip_size_kb": 0, 00:13:22.033 "state": "online", 00:13:22.033 "raid_level": "raid1", 00:13:22.033 "superblock": true, 
00:13:22.033 "num_base_bdevs": 2, 00:13:22.033 "num_base_bdevs_discovered": 1, 00:13:22.033 "num_base_bdevs_operational": 1, 00:13:22.033 "base_bdevs_list": [ 00:13:22.033 { 00:13:22.033 "name": null, 00:13:22.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.033 "is_configured": false, 00:13:22.033 "data_offset": 0, 00:13:22.033 "data_size": 63488 00:13:22.033 }, 00:13:22.033 { 00:13:22.033 "name": "BaseBdev2", 00:13:22.033 "uuid": "f064f389-d2d5-5d02-8df5-fe19527c755b", 00:13:22.033 "is_configured": true, 00:13:22.033 "data_offset": 2048, 00:13:22.033 "data_size": 63488 00:13:22.033 } 00:13:22.033 ] 00:13:22.033 }' 00:13:22.033 10:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.033 10:36:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.293 10:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:22.293 10:36:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.293 10:36:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.293 [2024-11-20 10:36:25.746070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:22.293 [2024-11-20 10:36:25.763017] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:13:22.293 10:36:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.293 [2024-11-20 10:36:25.764936] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:22.293 10:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:23.676 10:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:23.676 10:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:13:23.677 10:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:23.677 10:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:23.677 10:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.677 10:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.677 10:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.677 10:36:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.677 10:36:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.677 10:36:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.677 10:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.677 "name": "raid_bdev1", 00:13:23.677 "uuid": "cb365719-babc-4d1a-b7ed-c20f4b8099ca", 00:13:23.677 "strip_size_kb": 0, 00:13:23.677 "state": "online", 00:13:23.677 "raid_level": "raid1", 00:13:23.677 "superblock": true, 00:13:23.677 "num_base_bdevs": 2, 00:13:23.677 "num_base_bdevs_discovered": 2, 00:13:23.677 "num_base_bdevs_operational": 2, 00:13:23.677 "process": { 00:13:23.677 "type": "rebuild", 00:13:23.677 "target": "spare", 00:13:23.677 "progress": { 00:13:23.677 "blocks": 20480, 00:13:23.677 "percent": 32 00:13:23.677 } 00:13:23.677 }, 00:13:23.677 "base_bdevs_list": [ 00:13:23.677 { 00:13:23.677 "name": "spare", 00:13:23.677 "uuid": "7f2b159a-5cf2-5670-9374-435a480deb7f", 00:13:23.677 "is_configured": true, 00:13:23.677 "data_offset": 2048, 00:13:23.677 "data_size": 63488 00:13:23.677 }, 00:13:23.677 { 00:13:23.677 "name": "BaseBdev2", 00:13:23.677 "uuid": "f064f389-d2d5-5d02-8df5-fe19527c755b", 00:13:23.677 "is_configured": true, 00:13:23.677 "data_offset": 2048, 00:13:23.677 "data_size": 63488 
00:13:23.677 } 00:13:23.677 ] 00:13:23.677 }' 00:13:23.677 10:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.677 10:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:23.677 10:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.677 10:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:23.677 10:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:23.677 10:36:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.677 10:36:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.677 [2024-11-20 10:36:26.928584] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:23.677 [2024-11-20 10:36:26.970071] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:23.677 [2024-11-20 10:36:26.970201] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.677 [2024-11-20 10:36:26.970236] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:23.677 [2024-11-20 10:36:26.970260] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:23.677 10:36:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.677 10:36:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:23.677 10:36:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:23.677 10:36:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.677 10:36:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:13:23.677 10:36:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.677 10:36:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:23.677 10:36:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.677 10:36:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.677 10:36:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.677 10:36:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.677 10:36:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.677 10:36:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.677 10:36:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.677 10:36:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.677 10:36:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.677 10:36:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.677 "name": "raid_bdev1", 00:13:23.677 "uuid": "cb365719-babc-4d1a-b7ed-c20f4b8099ca", 00:13:23.677 "strip_size_kb": 0, 00:13:23.677 "state": "online", 00:13:23.677 "raid_level": "raid1", 00:13:23.677 "superblock": true, 00:13:23.677 "num_base_bdevs": 2, 00:13:23.677 "num_base_bdevs_discovered": 1, 00:13:23.677 "num_base_bdevs_operational": 1, 00:13:23.677 "base_bdevs_list": [ 00:13:23.677 { 00:13:23.677 "name": null, 00:13:23.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.677 "is_configured": false, 00:13:23.677 "data_offset": 0, 00:13:23.677 "data_size": 63488 00:13:23.677 }, 00:13:23.677 { 00:13:23.677 "name": "BaseBdev2", 00:13:23.677 "uuid": 
"f064f389-d2d5-5d02-8df5-fe19527c755b", 00:13:23.677 "is_configured": true, 00:13:23.677 "data_offset": 2048, 00:13:23.677 "data_size": 63488 00:13:23.677 } 00:13:23.677 ] 00:13:23.677 }' 00:13:23.677 10:36:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.677 10:36:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.248 10:36:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:24.248 10:36:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.248 10:36:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:24.248 10:36:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:24.248 10:36:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.248 10:36:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.248 10:36:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.248 10:36:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.248 10:36:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.248 10:36:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.248 10:36:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.248 "name": "raid_bdev1", 00:13:24.248 "uuid": "cb365719-babc-4d1a-b7ed-c20f4b8099ca", 00:13:24.248 "strip_size_kb": 0, 00:13:24.248 "state": "online", 00:13:24.248 "raid_level": "raid1", 00:13:24.248 "superblock": true, 00:13:24.248 "num_base_bdevs": 2, 00:13:24.248 "num_base_bdevs_discovered": 1, 00:13:24.248 "num_base_bdevs_operational": 1, 00:13:24.248 "base_bdevs_list": [ 00:13:24.248 { 
00:13:24.248 "name": null, 00:13:24.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.248 "is_configured": false, 00:13:24.248 "data_offset": 0, 00:13:24.248 "data_size": 63488 00:13:24.248 }, 00:13:24.248 { 00:13:24.248 "name": "BaseBdev2", 00:13:24.248 "uuid": "f064f389-d2d5-5d02-8df5-fe19527c755b", 00:13:24.248 "is_configured": true, 00:13:24.248 "data_offset": 2048, 00:13:24.248 "data_size": 63488 00:13:24.248 } 00:13:24.248 ] 00:13:24.248 }' 00:13:24.248 10:36:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.248 10:36:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:24.248 10:36:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.248 10:36:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:24.248 10:36:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:24.248 10:36:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.248 10:36:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.248 [2024-11-20 10:36:27.660633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:24.248 [2024-11-20 10:36:27.678011] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:13:24.248 10:36:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.248 10:36:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:24.248 [2024-11-20 10:36:27.680032] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:25.626 10:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:25.626 10:36:28 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.626 10:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:25.627 10:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:25.627 10:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.627 10:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.627 10:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.627 10:36:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.627 10:36:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.627 10:36:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.627 10:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.627 "name": "raid_bdev1", 00:13:25.627 "uuid": "cb365719-babc-4d1a-b7ed-c20f4b8099ca", 00:13:25.627 "strip_size_kb": 0, 00:13:25.627 "state": "online", 00:13:25.627 "raid_level": "raid1", 00:13:25.627 "superblock": true, 00:13:25.627 "num_base_bdevs": 2, 00:13:25.627 "num_base_bdevs_discovered": 2, 00:13:25.627 "num_base_bdevs_operational": 2, 00:13:25.627 "process": { 00:13:25.627 "type": "rebuild", 00:13:25.627 "target": "spare", 00:13:25.627 "progress": { 00:13:25.627 "blocks": 20480, 00:13:25.627 "percent": 32 00:13:25.627 } 00:13:25.627 }, 00:13:25.627 "base_bdevs_list": [ 00:13:25.627 { 00:13:25.627 "name": "spare", 00:13:25.627 "uuid": "7f2b159a-5cf2-5670-9374-435a480deb7f", 00:13:25.627 "is_configured": true, 00:13:25.627 "data_offset": 2048, 00:13:25.627 "data_size": 63488 00:13:25.627 }, 00:13:25.627 { 00:13:25.627 "name": "BaseBdev2", 00:13:25.627 "uuid": "f064f389-d2d5-5d02-8df5-fe19527c755b", 00:13:25.627 
"is_configured": true, 00:13:25.627 "data_offset": 2048, 00:13:25.627 "data_size": 63488 00:13:25.627 } 00:13:25.627 ] 00:13:25.627 }' 00:13:25.627 10:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.627 10:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:25.627 10:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.627 10:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:25.627 10:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:25.627 10:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:25.627 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:25.627 10:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:25.627 10:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:25.627 10:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:25.627 10:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=393 00:13:25.627 10:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:25.627 10:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:25.627 10:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.627 10:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:25.627 10:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:25.627 10:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:13:25.627 10:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.627 10:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.627 10:36:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.627 10:36:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.627 10:36:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.627 10:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.627 "name": "raid_bdev1", 00:13:25.627 "uuid": "cb365719-babc-4d1a-b7ed-c20f4b8099ca", 00:13:25.627 "strip_size_kb": 0, 00:13:25.627 "state": "online", 00:13:25.627 "raid_level": "raid1", 00:13:25.627 "superblock": true, 00:13:25.627 "num_base_bdevs": 2, 00:13:25.627 "num_base_bdevs_discovered": 2, 00:13:25.627 "num_base_bdevs_operational": 2, 00:13:25.627 "process": { 00:13:25.627 "type": "rebuild", 00:13:25.627 "target": "spare", 00:13:25.627 "progress": { 00:13:25.627 "blocks": 22528, 00:13:25.627 "percent": 35 00:13:25.627 } 00:13:25.627 }, 00:13:25.627 "base_bdevs_list": [ 00:13:25.627 { 00:13:25.627 "name": "spare", 00:13:25.627 "uuid": "7f2b159a-5cf2-5670-9374-435a480deb7f", 00:13:25.627 "is_configured": true, 00:13:25.627 "data_offset": 2048, 00:13:25.627 "data_size": 63488 00:13:25.627 }, 00:13:25.627 { 00:13:25.627 "name": "BaseBdev2", 00:13:25.627 "uuid": "f064f389-d2d5-5d02-8df5-fe19527c755b", 00:13:25.627 "is_configured": true, 00:13:25.627 "data_offset": 2048, 00:13:25.627 "data_size": 63488 00:13:25.627 } 00:13:25.627 ] 00:13:25.627 }' 00:13:25.627 10:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.627 10:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:25.627 10:36:28 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.627 10:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:25.627 10:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:26.564 10:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:26.564 10:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:26.564 10:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:26.564 10:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:26.564 10:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:26.564 10:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:26.564 10:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.564 10:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.564 10:36:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.564 10:36:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.564 10:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.823 10:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:26.823 "name": "raid_bdev1", 00:13:26.823 "uuid": "cb365719-babc-4d1a-b7ed-c20f4b8099ca", 00:13:26.823 "strip_size_kb": 0, 00:13:26.823 "state": "online", 00:13:26.823 "raid_level": "raid1", 00:13:26.823 "superblock": true, 00:13:26.823 "num_base_bdevs": 2, 00:13:26.823 "num_base_bdevs_discovered": 2, 00:13:26.823 "num_base_bdevs_operational": 2, 00:13:26.823 "process": { 
00:13:26.823 "type": "rebuild", 00:13:26.823 "target": "spare", 00:13:26.823 "progress": { 00:13:26.823 "blocks": 47104, 00:13:26.823 "percent": 74 00:13:26.823 } 00:13:26.823 }, 00:13:26.823 "base_bdevs_list": [ 00:13:26.823 { 00:13:26.823 "name": "spare", 00:13:26.823 "uuid": "7f2b159a-5cf2-5670-9374-435a480deb7f", 00:13:26.823 "is_configured": true, 00:13:26.823 "data_offset": 2048, 00:13:26.823 "data_size": 63488 00:13:26.823 }, 00:13:26.823 { 00:13:26.823 "name": "BaseBdev2", 00:13:26.823 "uuid": "f064f389-d2d5-5d02-8df5-fe19527c755b", 00:13:26.823 "is_configured": true, 00:13:26.823 "data_offset": 2048, 00:13:26.823 "data_size": 63488 00:13:26.823 } 00:13:26.823 ] 00:13:26.823 }' 00:13:26.823 10:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:26.823 10:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:26.823 10:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.823 10:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:26.823 10:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:27.390 [2024-11-20 10:36:30.794156] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:27.390 [2024-11-20 10:36:30.794387] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:27.390 [2024-11-20 10:36:30.794619] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.955 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:27.955 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.955 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.955 
10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.955 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.955 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.955 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.955 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.955 10:36:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.956 10:36:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.956 10:36:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.956 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.956 "name": "raid_bdev1", 00:13:27.956 "uuid": "cb365719-babc-4d1a-b7ed-c20f4b8099ca", 00:13:27.956 "strip_size_kb": 0, 00:13:27.956 "state": "online", 00:13:27.956 "raid_level": "raid1", 00:13:27.956 "superblock": true, 00:13:27.956 "num_base_bdevs": 2, 00:13:27.956 "num_base_bdevs_discovered": 2, 00:13:27.956 "num_base_bdevs_operational": 2, 00:13:27.956 "base_bdevs_list": [ 00:13:27.956 { 00:13:27.956 "name": "spare", 00:13:27.956 "uuid": "7f2b159a-5cf2-5670-9374-435a480deb7f", 00:13:27.956 "is_configured": true, 00:13:27.956 "data_offset": 2048, 00:13:27.956 "data_size": 63488 00:13:27.956 }, 00:13:27.956 { 00:13:27.956 "name": "BaseBdev2", 00:13:27.956 "uuid": "f064f389-d2d5-5d02-8df5-fe19527c755b", 00:13:27.956 "is_configured": true, 00:13:27.956 "data_offset": 2048, 00:13:27.956 "data_size": 63488 00:13:27.956 } 00:13:27.956 ] 00:13:27.956 }' 00:13:27.956 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.956 10:36:31 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:27.956 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.956 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:27.956 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:27.956 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:27.956 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.956 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:27.956 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:27.956 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.956 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.956 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.956 10:36:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.956 10:36:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.956 10:36:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.956 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.956 "name": "raid_bdev1", 00:13:27.956 "uuid": "cb365719-babc-4d1a-b7ed-c20f4b8099ca", 00:13:27.956 "strip_size_kb": 0, 00:13:27.956 "state": "online", 00:13:27.956 "raid_level": "raid1", 00:13:27.956 "superblock": true, 00:13:27.956 "num_base_bdevs": 2, 00:13:27.956 "num_base_bdevs_discovered": 2, 00:13:27.956 "num_base_bdevs_operational": 2, 00:13:27.956 "base_bdevs_list": [ 00:13:27.956 { 00:13:27.956 
"name": "spare", 00:13:27.956 "uuid": "7f2b159a-5cf2-5670-9374-435a480deb7f", 00:13:27.956 "is_configured": true, 00:13:27.956 "data_offset": 2048, 00:13:27.956 "data_size": 63488 00:13:27.956 }, 00:13:27.956 { 00:13:27.956 "name": "BaseBdev2", 00:13:27.956 "uuid": "f064f389-d2d5-5d02-8df5-fe19527c755b", 00:13:27.956 "is_configured": true, 00:13:27.956 "data_offset": 2048, 00:13:27.956 "data_size": 63488 00:13:27.956 } 00:13:27.956 ] 00:13:27.956 }' 00:13:27.956 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.956 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:27.956 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.214 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:28.214 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:28.214 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:28.214 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:28.214 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.214 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.214 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:28.214 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.214 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.214 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.214 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:13:28.214 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.214 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.214 10:36:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.214 10:36:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.214 10:36:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.214 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.214 "name": "raid_bdev1", 00:13:28.214 "uuid": "cb365719-babc-4d1a-b7ed-c20f4b8099ca", 00:13:28.214 "strip_size_kb": 0, 00:13:28.214 "state": "online", 00:13:28.214 "raid_level": "raid1", 00:13:28.214 "superblock": true, 00:13:28.214 "num_base_bdevs": 2, 00:13:28.214 "num_base_bdevs_discovered": 2, 00:13:28.214 "num_base_bdevs_operational": 2, 00:13:28.214 "base_bdevs_list": [ 00:13:28.214 { 00:13:28.214 "name": "spare", 00:13:28.214 "uuid": "7f2b159a-5cf2-5670-9374-435a480deb7f", 00:13:28.214 "is_configured": true, 00:13:28.214 "data_offset": 2048, 00:13:28.214 "data_size": 63488 00:13:28.214 }, 00:13:28.214 { 00:13:28.214 "name": "BaseBdev2", 00:13:28.214 "uuid": "f064f389-d2d5-5d02-8df5-fe19527c755b", 00:13:28.214 "is_configured": true, 00:13:28.214 "data_offset": 2048, 00:13:28.214 "data_size": 63488 00:13:28.214 } 00:13:28.214 ] 00:13:28.214 }' 00:13:28.214 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.214 10:36:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.473 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:28.473 10:36:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.473 10:36:31 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:28.473 [2024-11-20 10:36:31.902080] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:28.473 [2024-11-20 10:36:31.902123] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:28.473 [2024-11-20 10:36:31.902234] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:28.473 [2024-11-20 10:36:31.902326] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:28.473 [2024-11-20 10:36:31.902342] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:28.473 10:36:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.473 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.473 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:28.473 10:36:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.473 10:36:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.473 10:36:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.731 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:28.731 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:28.731 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:28.731 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:28.731 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:28.731 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:13:28.731 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:28.731 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:28.731 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:28.731 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:28.731 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:28.731 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:28.731 10:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:28.731 /dev/nbd0 00:13:28.991 10:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:28.991 10:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:28.991 10:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:28.991 10:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:28.991 10:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:28.991 10:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:28.991 10:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:28.991 10:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:28.991 10:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:28.991 10:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:28.991 10:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:28.991 1+0 records in 00:13:28.991 1+0 records out 00:13:28.991 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403589 s, 10.1 MB/s 00:13:28.991 10:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.991 10:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:28.991 10:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.991 10:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:28.991 10:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:28.991 10:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:28.991 10:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:28.991 10:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:29.250 /dev/nbd1 00:13:29.250 10:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:29.250 10:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:29.250 10:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:29.250 10:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:29.250 10:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:29.250 10:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:29.250 10:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:29.250 10:36:32 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:29.250 10:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:29.250 10:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:29.250 10:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:29.250 1+0 records in 00:13:29.250 1+0 records out 00:13:29.250 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255244 s, 16.0 MB/s 00:13:29.250 10:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.250 10:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:29.250 10:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.250 10:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:29.250 10:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:29.250 10:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:29.250 10:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:29.250 10:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:29.510 10:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:29.510 10:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:29.510 10:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:29.510 10:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:29.510 
10:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:29.510 10:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.510 10:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:29.510 10:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:29.510 10:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:29.510 10:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:29.510 10:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:29.510 10:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.510 10:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:29.510 10:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:29.510 10:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.510 10:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.510 10:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:29.772 10:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:29.772 10:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:29.772 10:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:29.772 10:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:29.772 10:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.772 10:36:33 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:29.772 10:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:29.772 10:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.772 10:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:29.772 10:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:29.772 10:36:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.772 10:36:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.034 10:36:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.034 10:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:30.034 10:36:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.035 10:36:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.035 [2024-11-20 10:36:33.258299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:30.035 [2024-11-20 10:36:33.258382] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.035 [2024-11-20 10:36:33.258439] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:30.035 [2024-11-20 10:36:33.258453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.035 [2024-11-20 10:36:33.261057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.035 [2024-11-20 10:36:33.261099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:30.035 [2024-11-20 10:36:33.261236] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:30.035 [2024-11-20 
10:36:33.261335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:30.035 [2024-11-20 10:36:33.261540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:30.035 spare 00:13:30.035 10:36:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.035 10:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:30.035 10:36:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.035 10:36:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.035 [2024-11-20 10:36:33.361486] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:30.035 [2024-11-20 10:36:33.361616] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:30.035 [2024-11-20 10:36:33.361986] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:13:30.035 [2024-11-20 10:36:33.362221] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:30.035 [2024-11-20 10:36:33.362233] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:30.035 [2024-11-20 10:36:33.362498] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:30.035 10:36:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.035 10:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:30.035 10:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.035 10:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.035 10:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:13:30.035 10:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:30.035 10:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:30.035 10:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.035 10:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.035 10:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.035 10:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.035 10:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.035 10:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.035 10:36:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.035 10:36:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.035 10:36:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.035 10:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.035 "name": "raid_bdev1", 00:13:30.035 "uuid": "cb365719-babc-4d1a-b7ed-c20f4b8099ca", 00:13:30.035 "strip_size_kb": 0, 00:13:30.035 "state": "online", 00:13:30.035 "raid_level": "raid1", 00:13:30.035 "superblock": true, 00:13:30.035 "num_base_bdevs": 2, 00:13:30.035 "num_base_bdevs_discovered": 2, 00:13:30.035 "num_base_bdevs_operational": 2, 00:13:30.035 "base_bdevs_list": [ 00:13:30.035 { 00:13:30.035 "name": "spare", 00:13:30.035 "uuid": "7f2b159a-5cf2-5670-9374-435a480deb7f", 00:13:30.035 "is_configured": true, 00:13:30.035 "data_offset": 2048, 00:13:30.035 "data_size": 63488 00:13:30.035 }, 00:13:30.035 { 00:13:30.035 "name": "BaseBdev2", 00:13:30.035 "uuid": 
"f064f389-d2d5-5d02-8df5-fe19527c755b", 00:13:30.035 "is_configured": true, 00:13:30.035 "data_offset": 2048, 00:13:30.035 "data_size": 63488 00:13:30.035 } 00:13:30.035 ] 00:13:30.035 }' 00:13:30.035 10:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.035 10:36:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.632 10:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:30.632 10:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.632 10:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:30.632 10:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:30.632 10:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.632 10:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.632 10:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.632 10:36:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.632 10:36:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.632 10:36:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.632 10:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:30.632 "name": "raid_bdev1", 00:13:30.632 "uuid": "cb365719-babc-4d1a-b7ed-c20f4b8099ca", 00:13:30.632 "strip_size_kb": 0, 00:13:30.632 "state": "online", 00:13:30.632 "raid_level": "raid1", 00:13:30.632 "superblock": true, 00:13:30.632 "num_base_bdevs": 2, 00:13:30.632 "num_base_bdevs_discovered": 2, 00:13:30.632 "num_base_bdevs_operational": 2, 00:13:30.632 "base_bdevs_list": [ 00:13:30.632 { 
00:13:30.632 "name": "spare", 00:13:30.632 "uuid": "7f2b159a-5cf2-5670-9374-435a480deb7f", 00:13:30.632 "is_configured": true, 00:13:30.632 "data_offset": 2048, 00:13:30.632 "data_size": 63488 00:13:30.632 }, 00:13:30.632 { 00:13:30.632 "name": "BaseBdev2", 00:13:30.632 "uuid": "f064f389-d2d5-5d02-8df5-fe19527c755b", 00:13:30.632 "is_configured": true, 00:13:30.632 "data_offset": 2048, 00:13:30.632 "data_size": 63488 00:13:30.632 } 00:13:30.632 ] 00:13:30.632 }' 00:13:30.632 10:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.632 10:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:30.632 10:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.632 10:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:30.632 10:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.632 10:36:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.633 10:36:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.633 10:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:30.633 10:36:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.633 10:36:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:30.633 10:36:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:30.633 10:36:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.633 10:36:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.633 [2024-11-20 10:36:34.025362] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:13:30.633 10:36:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.633 10:36:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:30.633 10:36:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.633 10:36:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.633 10:36:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:30.633 10:36:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:30.633 10:36:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:30.633 10:36:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.633 10:36:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.633 10:36:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.633 10:36:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.633 10:36:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.633 10:36:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.633 10:36:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.633 10:36:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.633 10:36:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.633 10:36:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.633 "name": "raid_bdev1", 00:13:30.633 "uuid": "cb365719-babc-4d1a-b7ed-c20f4b8099ca", 00:13:30.633 "strip_size_kb": 0, 00:13:30.633 
"state": "online", 00:13:30.633 "raid_level": "raid1", 00:13:30.633 "superblock": true, 00:13:30.633 "num_base_bdevs": 2, 00:13:30.633 "num_base_bdevs_discovered": 1, 00:13:30.633 "num_base_bdevs_operational": 1, 00:13:30.633 "base_bdevs_list": [ 00:13:30.633 { 00:13:30.633 "name": null, 00:13:30.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.633 "is_configured": false, 00:13:30.633 "data_offset": 0, 00:13:30.633 "data_size": 63488 00:13:30.633 }, 00:13:30.633 { 00:13:30.633 "name": "BaseBdev2", 00:13:30.633 "uuid": "f064f389-d2d5-5d02-8df5-fe19527c755b", 00:13:30.633 "is_configured": true, 00:13:30.633 "data_offset": 2048, 00:13:30.633 "data_size": 63488 00:13:30.633 } 00:13:30.633 ] 00:13:30.633 }' 00:13:30.633 10:36:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.633 10:36:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.200 10:36:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:31.200 10:36:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.200 10:36:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.200 [2024-11-20 10:36:34.512574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:31.200 [2024-11-20 10:36:34.512868] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:31.200 [2024-11-20 10:36:34.512896] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:31.200 [2024-11-20 10:36:34.512942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:31.200 [2024-11-20 10:36:34.532184] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:13:31.200 10:36:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.200 10:36:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:31.200 [2024-11-20 10:36:34.534346] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:32.136 10:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:32.136 10:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:32.136 10:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:32.136 10:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:32.136 10:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:32.136 10:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.136 10:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.136 10:36:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.136 10:36:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.136 10:36:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.136 10:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:32.136 "name": "raid_bdev1", 00:13:32.136 "uuid": "cb365719-babc-4d1a-b7ed-c20f4b8099ca", 00:13:32.136 "strip_size_kb": 0, 00:13:32.136 "state": "online", 00:13:32.136 "raid_level": "raid1", 
00:13:32.136 "superblock": true, 00:13:32.136 "num_base_bdevs": 2, 00:13:32.136 "num_base_bdevs_discovered": 2, 00:13:32.136 "num_base_bdevs_operational": 2, 00:13:32.136 "process": { 00:13:32.136 "type": "rebuild", 00:13:32.136 "target": "spare", 00:13:32.136 "progress": { 00:13:32.136 "blocks": 20480, 00:13:32.136 "percent": 32 00:13:32.136 } 00:13:32.136 }, 00:13:32.136 "base_bdevs_list": [ 00:13:32.136 { 00:13:32.136 "name": "spare", 00:13:32.136 "uuid": "7f2b159a-5cf2-5670-9374-435a480deb7f", 00:13:32.136 "is_configured": true, 00:13:32.136 "data_offset": 2048, 00:13:32.137 "data_size": 63488 00:13:32.137 }, 00:13:32.137 { 00:13:32.137 "name": "BaseBdev2", 00:13:32.137 "uuid": "f064f389-d2d5-5d02-8df5-fe19527c755b", 00:13:32.137 "is_configured": true, 00:13:32.137 "data_offset": 2048, 00:13:32.137 "data_size": 63488 00:13:32.137 } 00:13:32.137 ] 00:13:32.137 }' 00:13:32.137 10:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:32.395 10:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:32.395 10:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:32.395 10:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:32.395 10:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:32.395 10:36:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.395 10:36:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.395 [2024-11-20 10:36:35.658080] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:32.395 [2024-11-20 10:36:35.740365] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:32.395 [2024-11-20 10:36:35.740468] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:13:32.395 [2024-11-20 10:36:35.740487] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:32.395 [2024-11-20 10:36:35.740498] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:32.395 10:36:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.395 10:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:32.395 10:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.395 10:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.395 10:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.395 10:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.395 10:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:32.395 10:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.395 10:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.395 10:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.395 10:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.395 10:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.395 10:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.395 10:36:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.395 10:36:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.395 10:36:35 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.395 10:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.395 "name": "raid_bdev1", 00:13:32.395 "uuid": "cb365719-babc-4d1a-b7ed-c20f4b8099ca", 00:13:32.396 "strip_size_kb": 0, 00:13:32.396 "state": "online", 00:13:32.396 "raid_level": "raid1", 00:13:32.396 "superblock": true, 00:13:32.396 "num_base_bdevs": 2, 00:13:32.396 "num_base_bdevs_discovered": 1, 00:13:32.396 "num_base_bdevs_operational": 1, 00:13:32.396 "base_bdevs_list": [ 00:13:32.396 { 00:13:32.396 "name": null, 00:13:32.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.396 "is_configured": false, 00:13:32.396 "data_offset": 0, 00:13:32.396 "data_size": 63488 00:13:32.396 }, 00:13:32.396 { 00:13:32.396 "name": "BaseBdev2", 00:13:32.396 "uuid": "f064f389-d2d5-5d02-8df5-fe19527c755b", 00:13:32.396 "is_configured": true, 00:13:32.396 "data_offset": 2048, 00:13:32.396 "data_size": 63488 00:13:32.396 } 00:13:32.396 ] 00:13:32.396 }' 00:13:32.396 10:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.396 10:36:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.963 10:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:32.964 10:36:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.964 10:36:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.964 [2024-11-20 10:36:36.210656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:32.964 [2024-11-20 10:36:36.210793] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.964 [2024-11-20 10:36:36.210862] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:32.964 [2024-11-20 10:36:36.210898] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.964 [2024-11-20 10:36:36.211460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.964 [2024-11-20 10:36:36.211534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:32.964 [2024-11-20 10:36:36.211691] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:32.964 [2024-11-20 10:36:36.211737] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:32.964 [2024-11-20 10:36:36.211783] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:32.964 [2024-11-20 10:36:36.211836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:32.964 [2024-11-20 10:36:36.227479] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:32.964 spare 00:13:32.964 10:36:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.964 10:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:32.964 [2024-11-20 10:36:36.229476] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:33.923 10:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:33.923 10:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:33.923 10:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:33.923 10:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:33.923 10:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:33.923 10:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:33.923 10:36:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.923 10:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.923 10:36:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.923 10:36:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.923 10:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:33.923 "name": "raid_bdev1", 00:13:33.923 "uuid": "cb365719-babc-4d1a-b7ed-c20f4b8099ca", 00:13:33.923 "strip_size_kb": 0, 00:13:33.923 "state": "online", 00:13:33.923 "raid_level": "raid1", 00:13:33.923 "superblock": true, 00:13:33.923 "num_base_bdevs": 2, 00:13:33.923 "num_base_bdevs_discovered": 2, 00:13:33.923 "num_base_bdevs_operational": 2, 00:13:33.923 "process": { 00:13:33.923 "type": "rebuild", 00:13:33.923 "target": "spare", 00:13:33.923 "progress": { 00:13:33.923 "blocks": 20480, 00:13:33.923 "percent": 32 00:13:33.923 } 00:13:33.923 }, 00:13:33.923 "base_bdevs_list": [ 00:13:33.923 { 00:13:33.923 "name": "spare", 00:13:33.923 "uuid": "7f2b159a-5cf2-5670-9374-435a480deb7f", 00:13:33.923 "is_configured": true, 00:13:33.923 "data_offset": 2048, 00:13:33.923 "data_size": 63488 00:13:33.923 }, 00:13:33.923 { 00:13:33.923 "name": "BaseBdev2", 00:13:33.923 "uuid": "f064f389-d2d5-5d02-8df5-fe19527c755b", 00:13:33.923 "is_configured": true, 00:13:33.923 "data_offset": 2048, 00:13:33.923 "data_size": 63488 00:13:33.923 } 00:13:33.923 ] 00:13:33.923 }' 00:13:33.923 10:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.923 10:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:33.923 10:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.923 
10:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:33.923 10:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:33.923 10:36:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.923 10:36:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.923 [2024-11-20 10:36:37.368657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:34.193 [2024-11-20 10:36:37.434861] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:34.193 [2024-11-20 10:36:37.434927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.193 [2024-11-20 10:36:37.434945] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:34.193 [2024-11-20 10:36:37.434952] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:34.193 10:36:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.193 10:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:34.193 10:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:34.193 10:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:34.193 10:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:34.193 10:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:34.193 10:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:34.193 10:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.193 10:36:37 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.193 10:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.193 10:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.193 10:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.193 10:36:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.193 10:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.193 10:36:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.193 10:36:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.193 10:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.193 "name": "raid_bdev1", 00:13:34.193 "uuid": "cb365719-babc-4d1a-b7ed-c20f4b8099ca", 00:13:34.193 "strip_size_kb": 0, 00:13:34.193 "state": "online", 00:13:34.193 "raid_level": "raid1", 00:13:34.193 "superblock": true, 00:13:34.193 "num_base_bdevs": 2, 00:13:34.193 "num_base_bdevs_discovered": 1, 00:13:34.193 "num_base_bdevs_operational": 1, 00:13:34.193 "base_bdevs_list": [ 00:13:34.193 { 00:13:34.193 "name": null, 00:13:34.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.193 "is_configured": false, 00:13:34.193 "data_offset": 0, 00:13:34.193 "data_size": 63488 00:13:34.193 }, 00:13:34.193 { 00:13:34.193 "name": "BaseBdev2", 00:13:34.193 "uuid": "f064f389-d2d5-5d02-8df5-fe19527c755b", 00:13:34.193 "is_configured": true, 00:13:34.193 "data_offset": 2048, 00:13:34.193 "data_size": 63488 00:13:34.193 } 00:13:34.193 ] 00:13:34.193 }' 00:13:34.193 10:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.193 10:36:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.450 10:36:37 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:34.450 10:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:34.450 10:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:34.450 10:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:34.450 10:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:34.450 10:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.450 10:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.450 10:36:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.450 10:36:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.450 10:36:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.450 10:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:34.450 "name": "raid_bdev1", 00:13:34.450 "uuid": "cb365719-babc-4d1a-b7ed-c20f4b8099ca", 00:13:34.450 "strip_size_kb": 0, 00:13:34.450 "state": "online", 00:13:34.450 "raid_level": "raid1", 00:13:34.450 "superblock": true, 00:13:34.450 "num_base_bdevs": 2, 00:13:34.450 "num_base_bdevs_discovered": 1, 00:13:34.450 "num_base_bdevs_operational": 1, 00:13:34.450 "base_bdevs_list": [ 00:13:34.450 { 00:13:34.450 "name": null, 00:13:34.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.450 "is_configured": false, 00:13:34.450 "data_offset": 0, 00:13:34.450 "data_size": 63488 00:13:34.450 }, 00:13:34.450 { 00:13:34.450 "name": "BaseBdev2", 00:13:34.450 "uuid": "f064f389-d2d5-5d02-8df5-fe19527c755b", 00:13:34.451 "is_configured": true, 00:13:34.451 "data_offset": 2048, 00:13:34.451 "data_size": 
63488 00:13:34.451 } 00:13:34.451 ] 00:13:34.451 }' 00:13:34.451 10:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:34.709 10:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:34.709 10:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:34.709 10:36:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:34.709 10:36:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:34.709 10:36:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.709 10:36:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.709 10:36:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.709 10:36:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:34.709 10:36:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.709 10:36:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.709 [2024-11-20 10:36:38.021504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:34.709 [2024-11-20 10:36:38.021570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.709 [2024-11-20 10:36:38.021595] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:34.709 [2024-11-20 10:36:38.021616] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.709 [2024-11-20 10:36:38.022108] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.709 [2024-11-20 10:36:38.022126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:13:34.709 [2024-11-20 10:36:38.022224] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:34.709 [2024-11-20 10:36:38.022240] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:34.709 [2024-11-20 10:36:38.022251] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:34.709 [2024-11-20 10:36:38.022263] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:34.709 BaseBdev1 00:13:34.709 10:36:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.709 10:36:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:35.646 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:35.646 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.646 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.646 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.646 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.646 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:35.646 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.646 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.646 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.646 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.646 10:36:39 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.646 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.646 10:36:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.646 10:36:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.646 10:36:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.646 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.646 "name": "raid_bdev1", 00:13:35.646 "uuid": "cb365719-babc-4d1a-b7ed-c20f4b8099ca", 00:13:35.646 "strip_size_kb": 0, 00:13:35.646 "state": "online", 00:13:35.646 "raid_level": "raid1", 00:13:35.646 "superblock": true, 00:13:35.646 "num_base_bdevs": 2, 00:13:35.646 "num_base_bdevs_discovered": 1, 00:13:35.646 "num_base_bdevs_operational": 1, 00:13:35.646 "base_bdevs_list": [ 00:13:35.646 { 00:13:35.646 "name": null, 00:13:35.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.646 "is_configured": false, 00:13:35.646 "data_offset": 0, 00:13:35.646 "data_size": 63488 00:13:35.646 }, 00:13:35.646 { 00:13:35.646 "name": "BaseBdev2", 00:13:35.646 "uuid": "f064f389-d2d5-5d02-8df5-fe19527c755b", 00:13:35.646 "is_configured": true, 00:13:35.646 "data_offset": 2048, 00:13:35.646 "data_size": 63488 00:13:35.647 } 00:13:35.647 ] 00:13:35.647 }' 00:13:35.647 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.647 10:36:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.216 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:36.216 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.216 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:13:36.216 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:36.216 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.216 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.216 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.216 10:36:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.216 10:36:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.216 10:36:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.216 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.216 "name": "raid_bdev1", 00:13:36.216 "uuid": "cb365719-babc-4d1a-b7ed-c20f4b8099ca", 00:13:36.216 "strip_size_kb": 0, 00:13:36.216 "state": "online", 00:13:36.216 "raid_level": "raid1", 00:13:36.216 "superblock": true, 00:13:36.216 "num_base_bdevs": 2, 00:13:36.216 "num_base_bdevs_discovered": 1, 00:13:36.216 "num_base_bdevs_operational": 1, 00:13:36.216 "base_bdevs_list": [ 00:13:36.216 { 00:13:36.216 "name": null, 00:13:36.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.216 "is_configured": false, 00:13:36.216 "data_offset": 0, 00:13:36.216 "data_size": 63488 00:13:36.216 }, 00:13:36.216 { 00:13:36.216 "name": "BaseBdev2", 00:13:36.216 "uuid": "f064f389-d2d5-5d02-8df5-fe19527c755b", 00:13:36.216 "is_configured": true, 00:13:36.216 "data_offset": 2048, 00:13:36.216 "data_size": 63488 00:13:36.216 } 00:13:36.217 ] 00:13:36.217 }' 00:13:36.217 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.217 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:36.217 10:36:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.217 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:36.217 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:36.217 10:36:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:13:36.217 10:36:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:36.217 10:36:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:36.217 10:36:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:36.217 10:36:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:36.217 10:36:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:36.217 10:36:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:36.217 10:36:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.217 10:36:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.217 [2024-11-20 10:36:39.650910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:36.217 [2024-11-20 10:36:39.651092] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:36.217 [2024-11-20 10:36:39.651113] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:36.217 request: 00:13:36.217 { 00:13:36.217 "base_bdev": "BaseBdev1", 00:13:36.217 "raid_bdev": "raid_bdev1", 00:13:36.217 "method": 
"bdev_raid_add_base_bdev", 00:13:36.217 "req_id": 1 00:13:36.217 } 00:13:36.217 Got JSON-RPC error response 00:13:36.217 response: 00:13:36.217 { 00:13:36.217 "code": -22, 00:13:36.217 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:36.217 } 00:13:36.217 10:36:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:36.217 10:36:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:13:36.217 10:36:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:36.217 10:36:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:36.217 10:36:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:36.217 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:37.599 10:36:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:37.599 10:36:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.599 10:36:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.599 10:36:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.599 10:36:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.599 10:36:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:37.599 10:36:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.599 10:36:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.599 10:36:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.599 10:36:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.599 10:36:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.599 10:36:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.599 10:36:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.599 10:36:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.599 10:36:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.599 10:36:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.599 "name": "raid_bdev1", 00:13:37.599 "uuid": "cb365719-babc-4d1a-b7ed-c20f4b8099ca", 00:13:37.599 "strip_size_kb": 0, 00:13:37.599 "state": "online", 00:13:37.599 "raid_level": "raid1", 00:13:37.599 "superblock": true, 00:13:37.599 "num_base_bdevs": 2, 00:13:37.599 "num_base_bdevs_discovered": 1, 00:13:37.599 "num_base_bdevs_operational": 1, 00:13:37.599 "base_bdevs_list": [ 00:13:37.599 { 00:13:37.599 "name": null, 00:13:37.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.599 "is_configured": false, 00:13:37.599 "data_offset": 0, 00:13:37.599 "data_size": 63488 00:13:37.599 }, 00:13:37.599 { 00:13:37.599 "name": "BaseBdev2", 00:13:37.599 "uuid": "f064f389-d2d5-5d02-8df5-fe19527c755b", 00:13:37.599 "is_configured": true, 00:13:37.599 "data_offset": 2048, 00:13:37.599 "data_size": 63488 00:13:37.599 } 00:13:37.599 ] 00:13:37.599 }' 00:13:37.599 10:36:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.599 10:36:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.857 10:36:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:37.857 10:36:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.857 10:36:41 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:37.857 10:36:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:37.857 10:36:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.857 10:36:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.857 10:36:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.857 10:36:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.857 10:36:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.857 10:36:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.857 10:36:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.857 "name": "raid_bdev1", 00:13:37.857 "uuid": "cb365719-babc-4d1a-b7ed-c20f4b8099ca", 00:13:37.857 "strip_size_kb": 0, 00:13:37.857 "state": "online", 00:13:37.857 "raid_level": "raid1", 00:13:37.857 "superblock": true, 00:13:37.857 "num_base_bdevs": 2, 00:13:37.857 "num_base_bdevs_discovered": 1, 00:13:37.857 "num_base_bdevs_operational": 1, 00:13:37.857 "base_bdevs_list": [ 00:13:37.857 { 00:13:37.857 "name": null, 00:13:37.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.857 "is_configured": false, 00:13:37.858 "data_offset": 0, 00:13:37.858 "data_size": 63488 00:13:37.858 }, 00:13:37.858 { 00:13:37.858 "name": "BaseBdev2", 00:13:37.858 "uuid": "f064f389-d2d5-5d02-8df5-fe19527c755b", 00:13:37.858 "is_configured": true, 00:13:37.858 "data_offset": 2048, 00:13:37.858 "data_size": 63488 00:13:37.858 } 00:13:37.858 ] 00:13:37.858 }' 00:13:37.858 10:36:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.858 10:36:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:13:37.858 10:36:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.858 10:36:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:37.858 10:36:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75910 00:13:37.858 10:36:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75910 ']' 00:13:37.858 10:36:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75910 00:13:37.858 10:36:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:37.858 10:36:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:37.858 10:36:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75910 00:13:38.117 killing process with pid 75910 00:13:38.117 Received shutdown signal, test time was about 60.000000 seconds 00:13:38.117 00:13:38.117 Latency(us) 00:13:38.117 [2024-11-20T10:36:41.596Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:38.117 [2024-11-20T10:36:41.596Z] =================================================================================================================== 00:13:38.117 [2024-11-20T10:36:41.596Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:38.117 10:36:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:38.117 10:36:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:38.117 10:36:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75910' 00:13:38.117 10:36:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75910 00:13:38.117 10:36:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75910 00:13:38.117 [2024-11-20 
10:36:41.342216] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:38.117 [2024-11-20 10:36:41.342369] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:38.117 [2024-11-20 10:36:41.342433] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:38.117 [2024-11-20 10:36:41.342447] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:38.377 [2024-11-20 10:36:41.711776] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:39.761 10:36:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:39.761 ************************************ 00:13:39.761 END TEST raid_rebuild_test_sb 00:13:39.761 ************************************ 00:13:39.761 00:13:39.761 real 0m23.626s 00:13:39.761 user 0m29.176s 00:13:39.761 sys 0m3.589s 00:13:39.761 10:36:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:39.761 10:36:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.761 10:36:43 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:13:39.761 10:36:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:39.761 10:36:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:39.761 10:36:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:39.761 ************************************ 00:13:39.761 START TEST raid_rebuild_test_io 00:13:39.761 ************************************ 00:13:39.761 10:36:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:13:39.761 10:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:39.761 10:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:13:39.761 10:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:39.761 10:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:39.761 10:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:39.761 10:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:39.761 10:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:39.761 10:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:39.761 10:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:39.761 10:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:39.761 10:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:39.761 10:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:39.761 10:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:39.761 10:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:39.761 10:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:39.761 10:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:39.761 10:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:39.761 10:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:39.761 10:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:39.761 10:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:39.761 10:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:39.761 
10:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:39.761 10:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:39.762 10:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76642 00:13:39.762 10:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:39.762 10:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76642 00:13:39.762 10:36:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76642 ']' 00:13:39.762 10:36:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.762 10:36:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:39.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.762 10:36:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.762 10:36:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:39.762 10:36:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.762 [2024-11-20 10:36:43.160783] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:13:39.762 [2024-11-20 10:36:43.160982] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:39.762 Zero copy mechanism will not be used. 
00:13:39.762 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76642 ] 00:13:40.021 [2024-11-20 10:36:43.331190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.021 [2024-11-20 10:36:43.444397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.280 [2024-11-20 10:36:43.649803] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:40.280 [2024-11-20 10:36:43.649929] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:40.540 10:36:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:40.540 10:36:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:13:40.540 10:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:40.540 10:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:40.540 10:36:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.540 10:36:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.800 BaseBdev1_malloc 00:13:40.800 10:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.800 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:40.800 10:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.800 10:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.800 [2024-11-20 10:36:44.031158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:40.800 [2024-11-20 10:36:44.031277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:13:40.800 [2024-11-20 10:36:44.031320] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:40.800 [2024-11-20 10:36:44.031363] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.800 [2024-11-20 10:36:44.033509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.800 [2024-11-20 10:36:44.033594] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:40.800 BaseBdev1 00:13:40.800 10:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.800 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:40.800 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:40.800 10:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.800 10:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.800 BaseBdev2_malloc 00:13:40.800 10:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.800 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:40.800 10:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.800 10:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.800 [2024-11-20 10:36:44.086168] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:40.800 [2024-11-20 10:36:44.086228] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.800 [2024-11-20 10:36:44.086245] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:40.800 [2024-11-20 10:36:44.086256] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.800 [2024-11-20 10:36:44.088315] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.800 [2024-11-20 10:36:44.088363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:40.800 BaseBdev2 00:13:40.800 10:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.800 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:40.800 10:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.800 10:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.800 spare_malloc 00:13:40.800 10:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.800 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:40.800 10:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.800 10:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.800 spare_delay 00:13:40.800 10:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.800 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:40.800 10:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.800 10:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.800 [2024-11-20 10:36:44.162960] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:40.800 [2024-11-20 10:36:44.163069] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:13:40.800 [2024-11-20 10:36:44.163092] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:40.800 [2024-11-20 10:36:44.163103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.800 [2024-11-20 10:36:44.165247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.800 [2024-11-20 10:36:44.165286] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:40.800 spare 00:13:40.800 10:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.800 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:40.800 10:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.800 10:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.800 [2024-11-20 10:36:44.174988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:40.800 [2024-11-20 10:36:44.176761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:40.800 [2024-11-20 10:36:44.176856] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:40.800 [2024-11-20 10:36:44.176870] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:40.800 [2024-11-20 10:36:44.177098] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:40.800 [2024-11-20 10:36:44.177261] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:40.800 [2024-11-20 10:36:44.177271] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:40.800 [2024-11-20 10:36:44.177432] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:13:40.801 10:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.801 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:40.801 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:40.801 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:40.801 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:40.801 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:40.801 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:40.801 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.801 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.801 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.801 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.801 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.801 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.801 10:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.801 10:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.801 10:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.801 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.801 "name": "raid_bdev1", 00:13:40.801 "uuid": "1371cc07-f12f-4ad8-9f86-f18598fa5f66", 00:13:40.801 
"strip_size_kb": 0, 00:13:40.801 "state": "online", 00:13:40.801 "raid_level": "raid1", 00:13:40.801 "superblock": false, 00:13:40.801 "num_base_bdevs": 2, 00:13:40.801 "num_base_bdevs_discovered": 2, 00:13:40.801 "num_base_bdevs_operational": 2, 00:13:40.801 "base_bdevs_list": [ 00:13:40.801 { 00:13:40.801 "name": "BaseBdev1", 00:13:40.801 "uuid": "20ff7cda-8b36-5024-9b8c-f4ae3a66c17b", 00:13:40.801 "is_configured": true, 00:13:40.801 "data_offset": 0, 00:13:40.801 "data_size": 65536 00:13:40.801 }, 00:13:40.801 { 00:13:40.801 "name": "BaseBdev2", 00:13:40.801 "uuid": "8836b99d-ce20-5858-856a-cbf557959a2d", 00:13:40.801 "is_configured": true, 00:13:40.801 "data_offset": 0, 00:13:40.801 "data_size": 65536 00:13:40.801 } 00:13:40.801 ] 00:13:40.801 }' 00:13:40.801 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.801 10:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.368 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:41.368 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:41.368 10:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.368 10:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.368 [2024-11-20 10:36:44.618510] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:41.368 10:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.368 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:41.368 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.369 10:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.369 10:36:44 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:41.369 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:41.369 10:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.369 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:41.369 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:41.369 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:41.369 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:41.369 10:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.369 10:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.369 [2024-11-20 10:36:44.702084] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:41.369 10:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.369 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:41.369 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.369 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.369 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.369 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.369 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:41.369 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.369 10:36:44 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.369 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.369 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.369 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.369 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.369 10:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.369 10:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.369 10:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.369 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.369 "name": "raid_bdev1", 00:13:41.369 "uuid": "1371cc07-f12f-4ad8-9f86-f18598fa5f66", 00:13:41.369 "strip_size_kb": 0, 00:13:41.369 "state": "online", 00:13:41.369 "raid_level": "raid1", 00:13:41.369 "superblock": false, 00:13:41.369 "num_base_bdevs": 2, 00:13:41.369 "num_base_bdevs_discovered": 1, 00:13:41.369 "num_base_bdevs_operational": 1, 00:13:41.369 "base_bdevs_list": [ 00:13:41.369 { 00:13:41.369 "name": null, 00:13:41.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.369 "is_configured": false, 00:13:41.369 "data_offset": 0, 00:13:41.369 "data_size": 65536 00:13:41.369 }, 00:13:41.369 { 00:13:41.369 "name": "BaseBdev2", 00:13:41.369 "uuid": "8836b99d-ce20-5858-856a-cbf557959a2d", 00:13:41.369 "is_configured": true, 00:13:41.369 "data_offset": 0, 00:13:41.369 "data_size": 65536 00:13:41.369 } 00:13:41.369 ] 00:13:41.369 }' 00:13:41.369 10:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.369 10:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:13:41.369 [2024-11-20 10:36:44.798018] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:41.369 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:41.369 Zero copy mechanism will not be used. 00:13:41.369 Running I/O for 60 seconds... 00:13:41.960 10:36:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:41.960 10:36:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.960 10:36:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.960 [2024-11-20 10:36:45.124438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:41.960 10:36:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.960 10:36:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:41.960 [2024-11-20 10:36:45.180461] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:41.960 [2024-11-20 10:36:45.182454] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:41.960 [2024-11-20 10:36:45.289957] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:41.960 [2024-11-20 10:36:45.290621] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:42.248 [2024-11-20 10:36:45.515440] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:42.248 [2024-11-20 10:36:45.515879] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:42.520 147.00 IOPS, 441.00 MiB/s [2024-11-20T10:36:45.999Z] [2024-11-20 10:36:45.848634] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:42.520 [2024-11-20 10:36:45.956032] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:42.521 [2024-11-20 10:36:45.956434] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:42.781 10:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:42.781 10:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.781 10:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:42.781 10:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:42.781 10:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.781 10:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.781 10:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.781 10:36:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.781 10:36:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.781 10:36:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.781 [2024-11-20 10:36:46.203801] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:42.781 10:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.781 "name": "raid_bdev1", 00:13:42.781 "uuid": "1371cc07-f12f-4ad8-9f86-f18598fa5f66", 00:13:42.781 "strip_size_kb": 0, 00:13:42.781 "state": "online", 00:13:42.781 "raid_level": "raid1", 00:13:42.781 "superblock": false, 
00:13:42.781 "num_base_bdevs": 2, 00:13:42.781 "num_base_bdevs_discovered": 2, 00:13:42.781 "num_base_bdevs_operational": 2, 00:13:42.781 "process": { 00:13:42.781 "type": "rebuild", 00:13:42.781 "target": "spare", 00:13:42.781 "progress": { 00:13:42.781 "blocks": 12288, 00:13:42.781 "percent": 18 00:13:42.781 } 00:13:42.781 }, 00:13:42.781 "base_bdevs_list": [ 00:13:42.781 { 00:13:42.781 "name": "spare", 00:13:42.781 "uuid": "e3babc82-7840-5bea-98f3-8d65325929b1", 00:13:42.781 "is_configured": true, 00:13:42.781 "data_offset": 0, 00:13:42.781 "data_size": 65536 00:13:42.781 }, 00:13:42.781 { 00:13:42.781 "name": "BaseBdev2", 00:13:42.781 "uuid": "8836b99d-ce20-5858-856a-cbf557959a2d", 00:13:42.781 "is_configured": true, 00:13:42.781 "data_offset": 0, 00:13:42.781 "data_size": 65536 00:13:42.781 } 00:13:42.781 ] 00:13:42.781 }' 00:13:42.781 10:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.041 10:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:43.041 10:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.041 [2024-11-20 10:36:46.312463] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:43.041 10:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:43.041 10:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:43.041 10:36:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.041 10:36:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.041 [2024-11-20 10:36:46.325991] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:43.041 [2024-11-20 10:36:46.420699] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:43.041 [2024-11-20 10:36:46.433103] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:43.041 [2024-11-20 10:36:46.440678] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.041 [2024-11-20 10:36:46.440753] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:43.041 [2024-11-20 10:36:46.440785] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:43.041 [2024-11-20 10:36:46.478567] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:43.041 10:36:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.041 10:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:43.041 10:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.041 10:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.041 10:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.041 10:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.041 10:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:43.041 10:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.041 10:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.041 10:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.041 10:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.041 10:36:46 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.041 10:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.041 10:36:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.041 10:36:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.301 10:36:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.301 10:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.301 "name": "raid_bdev1", 00:13:43.301 "uuid": "1371cc07-f12f-4ad8-9f86-f18598fa5f66", 00:13:43.301 "strip_size_kb": 0, 00:13:43.301 "state": "online", 00:13:43.301 "raid_level": "raid1", 00:13:43.301 "superblock": false, 00:13:43.301 "num_base_bdevs": 2, 00:13:43.301 "num_base_bdevs_discovered": 1, 00:13:43.301 "num_base_bdevs_operational": 1, 00:13:43.301 "base_bdevs_list": [ 00:13:43.301 { 00:13:43.301 "name": null, 00:13:43.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.301 "is_configured": false, 00:13:43.301 "data_offset": 0, 00:13:43.301 "data_size": 65536 00:13:43.301 }, 00:13:43.301 { 00:13:43.301 "name": "BaseBdev2", 00:13:43.301 "uuid": "8836b99d-ce20-5858-856a-cbf557959a2d", 00:13:43.301 "is_configured": true, 00:13:43.301 "data_offset": 0, 00:13:43.301 "data_size": 65536 00:13:43.301 } 00:13:43.301 ] 00:13:43.301 }' 00:13:43.301 10:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.301 10:36:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.560 152.50 IOPS, 457.50 MiB/s [2024-11-20T10:36:47.039Z] 10:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:43.560 10:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.560 10:36:46 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:43.560 10:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:43.560 10:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.560 10:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.560 10:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.560 10:36:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.560 10:36:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.560 10:36:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.560 10:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.560 "name": "raid_bdev1", 00:13:43.560 "uuid": "1371cc07-f12f-4ad8-9f86-f18598fa5f66", 00:13:43.560 "strip_size_kb": 0, 00:13:43.560 "state": "online", 00:13:43.560 "raid_level": "raid1", 00:13:43.560 "superblock": false, 00:13:43.560 "num_base_bdevs": 2, 00:13:43.560 "num_base_bdevs_discovered": 1, 00:13:43.560 "num_base_bdevs_operational": 1, 00:13:43.560 "base_bdevs_list": [ 00:13:43.560 { 00:13:43.560 "name": null, 00:13:43.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.560 "is_configured": false, 00:13:43.560 "data_offset": 0, 00:13:43.560 "data_size": 65536 00:13:43.560 }, 00:13:43.560 { 00:13:43.560 "name": "BaseBdev2", 00:13:43.560 "uuid": "8836b99d-ce20-5858-856a-cbf557959a2d", 00:13:43.560 "is_configured": true, 00:13:43.560 "data_offset": 0, 00:13:43.560 "data_size": 65536 00:13:43.560 } 00:13:43.560 ] 00:13:43.560 }' 00:13:43.560 10:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.560 10:36:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- 
# [[ none == \n\o\n\e ]] 00:13:43.560 10:36:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.820 10:36:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:43.820 10:36:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:43.820 10:36:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.820 10:36:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.820 [2024-11-20 10:36:47.073784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:43.820 10:36:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.820 10:36:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:43.820 [2024-11-20 10:36:47.136129] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:43.820 [2024-11-20 10:36:47.138101] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:44.079 [2024-11-20 10:36:47.395073] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:44.079 [2024-11-20 10:36:47.395395] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:44.339 [2024-11-20 10:36:47.631058] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:44.339 [2024-11-20 10:36:47.732568] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:44.339 [2024-11-20 10:36:47.732875] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:44.598 163.33 IOPS, 
490.00 MiB/s [2024-11-20T10:36:48.077Z] [2024-11-20 10:36:47.977073] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:44.856 10:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:44.856 10:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:44.856 10:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:44.856 10:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:44.856 10:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:44.856 10:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.856 10:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.856 10:36:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.856 10:36:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.856 10:36:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.856 10:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:44.856 "name": "raid_bdev1", 00:13:44.856 "uuid": "1371cc07-f12f-4ad8-9f86-f18598fa5f66", 00:13:44.856 "strip_size_kb": 0, 00:13:44.857 "state": "online", 00:13:44.857 "raid_level": "raid1", 00:13:44.857 "superblock": false, 00:13:44.857 "num_base_bdevs": 2, 00:13:44.857 "num_base_bdevs_discovered": 2, 00:13:44.857 "num_base_bdevs_operational": 2, 00:13:44.857 "process": { 00:13:44.857 "type": "rebuild", 00:13:44.857 "target": "spare", 00:13:44.857 "progress": { 00:13:44.857 "blocks": 14336, 00:13:44.857 "percent": 21 00:13:44.857 } 00:13:44.857 }, 00:13:44.857 "base_bdevs_list": [ 
00:13:44.857 { 00:13:44.857 "name": "spare", 00:13:44.857 "uuid": "e3babc82-7840-5bea-98f3-8d65325929b1", 00:13:44.857 "is_configured": true, 00:13:44.857 "data_offset": 0, 00:13:44.857 "data_size": 65536 00:13:44.857 }, 00:13:44.857 { 00:13:44.857 "name": "BaseBdev2", 00:13:44.857 "uuid": "8836b99d-ce20-5858-856a-cbf557959a2d", 00:13:44.857 "is_configured": true, 00:13:44.857 "data_offset": 0, 00:13:44.857 "data_size": 65536 00:13:44.857 } 00:13:44.857 ] 00:13:44.857 }' 00:13:44.857 10:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:44.857 [2024-11-20 10:36:48.193590] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:44.857 10:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:44.857 10:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:44.857 10:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:44.857 10:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:44.857 10:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:44.857 10:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:44.857 10:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:44.857 10:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=413 00:13:44.857 10:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:44.857 10:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:44.857 10:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:44.857 10:36:48 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:44.857 10:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:44.857 10:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:44.857 10:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.857 10:36:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.857 10:36:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.857 10:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.857 10:36:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.857 10:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:44.857 "name": "raid_bdev1", 00:13:44.857 "uuid": "1371cc07-f12f-4ad8-9f86-f18598fa5f66", 00:13:44.857 "strip_size_kb": 0, 00:13:44.857 "state": "online", 00:13:44.857 "raid_level": "raid1", 00:13:44.857 "superblock": false, 00:13:44.857 "num_base_bdevs": 2, 00:13:44.857 "num_base_bdevs_discovered": 2, 00:13:44.857 "num_base_bdevs_operational": 2, 00:13:44.857 "process": { 00:13:44.857 "type": "rebuild", 00:13:44.857 "target": "spare", 00:13:44.857 "progress": { 00:13:44.857 "blocks": 16384, 00:13:44.857 "percent": 25 00:13:44.857 } 00:13:44.857 }, 00:13:44.857 "base_bdevs_list": [ 00:13:44.857 { 00:13:44.857 "name": "spare", 00:13:44.857 "uuid": "e3babc82-7840-5bea-98f3-8d65325929b1", 00:13:44.857 "is_configured": true, 00:13:44.857 "data_offset": 0, 00:13:44.857 "data_size": 65536 00:13:44.857 }, 00:13:44.857 { 00:13:44.857 "name": "BaseBdev2", 00:13:44.857 "uuid": "8836b99d-ce20-5858-856a-cbf557959a2d", 00:13:44.857 "is_configured": true, 00:13:44.857 "data_offset": 0, 00:13:44.857 "data_size": 65536 00:13:44.857 } 00:13:44.857 ] 
00:13:44.857 }' 00:13:44.857 10:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.116 10:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:45.116 10:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.116 10:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.116 10:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:45.116 [2024-11-20 10:36:48.427986] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:45.376 [2024-11-20 10:36:48.653034] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:45.635 135.75 IOPS, 407.25 MiB/s [2024-11-20T10:36:49.114Z] [2024-11-20 10:36:49.084330] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:45.894 [2024-11-20 10:36:49.338728] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:46.158 10:36:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:46.158 10:36:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:46.158 10:36:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.158 10:36:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:46.158 10:36:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:46.158 10:36:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:46.158 10:36:49 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.158 10:36:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.158 10:36:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.158 10:36:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.158 10:36:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.158 10:36:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:46.158 "name": "raid_bdev1", 00:13:46.158 "uuid": "1371cc07-f12f-4ad8-9f86-f18598fa5f66", 00:13:46.158 "strip_size_kb": 0, 00:13:46.158 "state": "online", 00:13:46.158 "raid_level": "raid1", 00:13:46.158 "superblock": false, 00:13:46.158 "num_base_bdevs": 2, 00:13:46.158 "num_base_bdevs_discovered": 2, 00:13:46.158 "num_base_bdevs_operational": 2, 00:13:46.158 "process": { 00:13:46.158 "type": "rebuild", 00:13:46.158 "target": "spare", 00:13:46.158 "progress": { 00:13:46.159 "blocks": 32768, 00:13:46.159 "percent": 50 00:13:46.159 } 00:13:46.159 }, 00:13:46.159 "base_bdevs_list": [ 00:13:46.159 { 00:13:46.159 "name": "spare", 00:13:46.159 "uuid": "e3babc82-7840-5bea-98f3-8d65325929b1", 00:13:46.159 "is_configured": true, 00:13:46.159 "data_offset": 0, 00:13:46.159 "data_size": 65536 00:13:46.159 }, 00:13:46.159 { 00:13:46.159 "name": "BaseBdev2", 00:13:46.159 "uuid": "8836b99d-ce20-5858-856a-cbf557959a2d", 00:13:46.159 "is_configured": true, 00:13:46.159 "data_offset": 0, 00:13:46.159 "data_size": 65536 00:13:46.159 } 00:13:46.159 ] 00:13:46.159 }' 00:13:46.159 10:36:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:46.159 10:36:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:46.159 10:36:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:13:46.159 10:36:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:46.159 10:36:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:46.159 [2024-11-20 10:36:49.563440] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:47.355 117.60 IOPS, 352.80 MiB/s [2024-11-20T10:36:50.834Z] 10:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:47.355 10:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:47.355 10:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.355 10:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:47.355 10:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:47.355 10:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.355 10:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.355 10:36:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.355 10:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.355 10:36:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.355 10:36:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.355 10:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.355 "name": "raid_bdev1", 00:13:47.355 "uuid": "1371cc07-f12f-4ad8-9f86-f18598fa5f66", 00:13:47.355 "strip_size_kb": 0, 00:13:47.355 "state": "online", 00:13:47.355 "raid_level": "raid1", 00:13:47.355 "superblock": false, 
00:13:47.355 "num_base_bdevs": 2, 00:13:47.355 "num_base_bdevs_discovered": 2, 00:13:47.355 "num_base_bdevs_operational": 2, 00:13:47.355 "process": { 00:13:47.355 "type": "rebuild", 00:13:47.355 "target": "spare", 00:13:47.355 "progress": { 00:13:47.355 "blocks": 53248, 00:13:47.355 "percent": 81 00:13:47.355 } 00:13:47.355 }, 00:13:47.355 "base_bdevs_list": [ 00:13:47.355 { 00:13:47.355 "name": "spare", 00:13:47.355 "uuid": "e3babc82-7840-5bea-98f3-8d65325929b1", 00:13:47.355 "is_configured": true, 00:13:47.355 "data_offset": 0, 00:13:47.355 "data_size": 65536 00:13:47.355 }, 00:13:47.355 { 00:13:47.355 "name": "BaseBdev2", 00:13:47.355 "uuid": "8836b99d-ce20-5858-856a-cbf557959a2d", 00:13:47.355 "is_configured": true, 00:13:47.355 "data_offset": 0, 00:13:47.355 "data_size": 65536 00:13:47.355 } 00:13:47.355 ] 00:13:47.355 }' 00:13:47.355 10:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.355 10:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:47.355 10:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.355 10:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:47.355 10:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:47.924 105.17 IOPS, 315.50 MiB/s [2024-11-20T10:36:51.403Z] [2024-11-20 10:36:51.216588] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:47.924 [2024-11-20 10:36:51.316420] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:47.924 [2024-11-20 10:36:51.324981] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.491 10:36:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:48.491 10:36:51 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:48.491 10:36:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.491 10:36:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:48.491 10:36:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:48.491 10:36:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.491 10:36:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.491 10:36:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.491 10:36:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.491 10:36:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.491 10:36:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.491 10:36:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.491 "name": "raid_bdev1", 00:13:48.491 "uuid": "1371cc07-f12f-4ad8-9f86-f18598fa5f66", 00:13:48.491 "strip_size_kb": 0, 00:13:48.491 "state": "online", 00:13:48.491 "raid_level": "raid1", 00:13:48.491 "superblock": false, 00:13:48.491 "num_base_bdevs": 2, 00:13:48.491 "num_base_bdevs_discovered": 2, 00:13:48.491 "num_base_bdevs_operational": 2, 00:13:48.491 "base_bdevs_list": [ 00:13:48.491 { 00:13:48.491 "name": "spare", 00:13:48.491 "uuid": "e3babc82-7840-5bea-98f3-8d65325929b1", 00:13:48.491 "is_configured": true, 00:13:48.491 "data_offset": 0, 00:13:48.491 "data_size": 65536 00:13:48.491 }, 00:13:48.491 { 00:13:48.491 "name": "BaseBdev2", 00:13:48.491 "uuid": "8836b99d-ce20-5858-856a-cbf557959a2d", 00:13:48.491 "is_configured": true, 00:13:48.491 "data_offset": 0, 00:13:48.491 "data_size": 65536 00:13:48.491 } 
00:13:48.491 ] 00:13:48.491 }' 00:13:48.491 10:36:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.491 95.57 IOPS, 286.71 MiB/s [2024-11-20T10:36:51.970Z] 10:36:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:48.491 10:36:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.491 10:36:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:48.491 10:36:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:48.491 10:36:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:48.491 10:36:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.491 10:36:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:48.491 10:36:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:48.491 10:36:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.491 10:36:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.491 10:36:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.491 10:36:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.491 10:36:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.491 10:36:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.491 10:36:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.491 "name": "raid_bdev1", 00:13:48.491 "uuid": "1371cc07-f12f-4ad8-9f86-f18598fa5f66", 00:13:48.491 "strip_size_kb": 0, 00:13:48.491 "state": "online", 
00:13:48.491 "raid_level": "raid1", 00:13:48.491 "superblock": false, 00:13:48.491 "num_base_bdevs": 2, 00:13:48.491 "num_base_bdevs_discovered": 2, 00:13:48.491 "num_base_bdevs_operational": 2, 00:13:48.491 "base_bdevs_list": [ 00:13:48.491 { 00:13:48.491 "name": "spare", 00:13:48.491 "uuid": "e3babc82-7840-5bea-98f3-8d65325929b1", 00:13:48.491 "is_configured": true, 00:13:48.491 "data_offset": 0, 00:13:48.491 "data_size": 65536 00:13:48.491 }, 00:13:48.491 { 00:13:48.491 "name": "BaseBdev2", 00:13:48.491 "uuid": "8836b99d-ce20-5858-856a-cbf557959a2d", 00:13:48.491 "is_configured": true, 00:13:48.491 "data_offset": 0, 00:13:48.491 "data_size": 65536 00:13:48.491 } 00:13:48.491 ] 00:13:48.491 }' 00:13:48.491 10:36:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.491 10:36:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:48.750 10:36:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.750 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:48.750 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:48.750 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.750 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.750 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:48.750 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:48.750 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:48.750 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.750 10:36:52 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.750 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.750 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.750 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.750 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.750 10:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.750 10:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.750 10:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.750 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.750 "name": "raid_bdev1", 00:13:48.750 "uuid": "1371cc07-f12f-4ad8-9f86-f18598fa5f66", 00:13:48.750 "strip_size_kb": 0, 00:13:48.751 "state": "online", 00:13:48.751 "raid_level": "raid1", 00:13:48.751 "superblock": false, 00:13:48.751 "num_base_bdevs": 2, 00:13:48.751 "num_base_bdevs_discovered": 2, 00:13:48.751 "num_base_bdevs_operational": 2, 00:13:48.751 "base_bdevs_list": [ 00:13:48.751 { 00:13:48.751 "name": "spare", 00:13:48.751 "uuid": "e3babc82-7840-5bea-98f3-8d65325929b1", 00:13:48.751 "is_configured": true, 00:13:48.751 "data_offset": 0, 00:13:48.751 "data_size": 65536 00:13:48.751 }, 00:13:48.751 { 00:13:48.751 "name": "BaseBdev2", 00:13:48.751 "uuid": "8836b99d-ce20-5858-856a-cbf557959a2d", 00:13:48.751 "is_configured": true, 00:13:48.751 "data_offset": 0, 00:13:48.751 "data_size": 65536 00:13:48.751 } 00:13:48.751 ] 00:13:48.751 }' 00:13:48.751 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.751 10:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.009 10:36:52 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:49.009 10:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.009 10:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.009 [2024-11-20 10:36:52.451792] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:49.009 [2024-11-20 10:36:52.451823] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:49.267 00:13:49.267 Latency(us) 00:13:49.267 [2024-11-20T10:36:52.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:49.267 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:49.267 raid_bdev1 : 7.75 89.63 268.88 0.00 0.00 15480.95 318.38 116762.83 00:13:49.267 [2024-11-20T10:36:52.746Z] =================================================================================================================== 00:13:49.267 [2024-11-20T10:36:52.746Z] Total : 89.63 268.88 0.00 0.00 15480.95 318.38 116762.83 00:13:49.267 [2024-11-20 10:36:52.560321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.267 [2024-11-20 10:36:52.560363] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:49.267 [2024-11-20 10:36:52.560468] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:49.267 [2024-11-20 10:36:52.560481] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:49.267 { 00:13:49.267 "results": [ 00:13:49.267 { 00:13:49.267 "job": "raid_bdev1", 00:13:49.267 "core_mask": "0x1", 00:13:49.267 "workload": "randrw", 00:13:49.267 "percentage": 50, 00:13:49.267 "status": "finished", 00:13:49.267 "queue_depth": 2, 00:13:49.267 "io_size": 3145728, 00:13:49.267 "runtime": 7.754428, 
00:13:49.267 "iops": 89.62621098551692, 00:13:49.267 "mibps": 268.87863295655075, 00:13:49.267 "io_failed": 0, 00:13:49.267 "io_timeout": 0, 00:13:49.267 "avg_latency_us": 15480.94594577613, 00:13:49.267 "min_latency_us": 318.37903930131006, 00:13:49.267 "max_latency_us": 116762.82969432314 00:13:49.267 } 00:13:49.267 ], 00:13:49.267 "core_count": 1 00:13:49.267 } 00:13:49.267 10:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.267 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.267 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:49.267 10:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.267 10:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.267 10:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.267 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:49.267 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:49.267 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:49.267 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:49.267 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:49.267 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:49.267 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:49.267 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:49.267 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:49.267 10:36:52 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@12 -- # local i 00:13:49.267 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:49.267 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:49.267 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:49.526 /dev/nbd0 00:13:49.526 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:49.526 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:49.526 10:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:49.526 10:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:49.526 10:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:49.526 10:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:49.526 10:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:49.526 10:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:49.526 10:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:49.526 10:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:49.526 10:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:49.526 1+0 records in 00:13:49.526 1+0 records out 00:13:49.526 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355895 s, 11.5 MB/s 00:13:49.526 10:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:49.526 10:36:52 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@890 -- # size=4096 00:13:49.526 10:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:49.526 10:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:49.526 10:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:49.526 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:49.526 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:49.526 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:49.526 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:49.526 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:49.526 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:49.527 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:49.527 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:49.527 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:49.527 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:49.527 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:49.527 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:49.527 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:49.527 10:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:49.786 /dev/nbd1 00:13:49.786 10:36:53 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:49.786 10:36:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:49.786 10:36:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:49.787 10:36:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:49.787 10:36:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:49.787 10:36:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:49.787 10:36:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:49.787 10:36:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:49.787 10:36:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:49.787 10:36:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:49.787 10:36:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:49.787 1+0 records in 00:13:49.787 1+0 records out 00:13:49.787 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514174 s, 8.0 MB/s 00:13:49.787 10:36:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:49.787 10:36:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:49.787 10:36:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:49.787 10:36:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:49.787 10:36:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:49.787 10:36:53 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:49.787 10:36:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:49.787 10:36:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:50.046 10:36:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:50.046 10:36:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:50.046 10:36:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:50.046 10:36:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:50.046 10:36:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:50.046 10:36:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:50.046 10:36:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:50.046 10:36:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:50.046 10:36:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:50.046 10:36:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:50.046 10:36:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:50.046 10:36:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:50.046 10:36:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:50.046 10:36:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:50.046 10:36:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:50.046 10:36:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks 
/var/tmp/spdk.sock /dev/nbd0 00:13:50.046 10:36:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:50.046 10:36:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:50.046 10:36:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:50.046 10:36:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:50.046 10:36:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:50.046 10:36:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:50.305 10:36:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:50.305 10:36:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:50.305 10:36:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:50.305 10:36:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:50.305 10:36:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:50.305 10:36:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:50.305 10:36:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:50.305 10:36:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:50.305 10:36:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:50.305 10:36:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76642 00:13:50.305 10:36:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76642 ']' 00:13:50.305 10:36:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76642 00:13:50.305 10:36:53 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@959 -- # uname 00:13:50.305 10:36:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:50.305 10:36:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76642 00:13:50.305 10:36:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:50.305 10:36:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:50.305 killing process with pid 76642 00:13:50.305 Received shutdown signal, test time was about 8.982924 seconds 00:13:50.305 00:13:50.305 Latency(us) 00:13:50.305 [2024-11-20T10:36:53.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.305 [2024-11-20T10:36:53.784Z] =================================================================================================================== 00:13:50.305 [2024-11-20T10:36:53.784Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:50.305 10:36:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76642' 00:13:50.305 10:36:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76642 00:13:50.305 [2024-11-20 10:36:53.765688] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:50.305 10:36:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76642 00:13:50.565 [2024-11-20 10:36:54.003590] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:51.943 10:36:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:51.943 00:13:51.943 real 0m12.089s 00:13:51.943 user 0m15.230s 00:13:51.943 sys 0m1.391s 00:13:51.943 10:36:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:51.943 10:36:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.943 ************************************ 
00:13:51.943 END TEST raid_rebuild_test_io 00:13:51.943 ************************************ 00:13:51.943 10:36:55 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:13:51.943 10:36:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:51.943 10:36:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:51.943 10:36:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:51.943 ************************************ 00:13:51.943 START TEST raid_rebuild_test_sb_io 00:13:51.943 ************************************ 00:13:51.943 10:36:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:13:51.943 10:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:51.943 10:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:51.943 10:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:51.943 10:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:51.943 10:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:51.943 10:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:51.943 10:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:51.943 10:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:51.943 10:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:51.943 10:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:51.943 10:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:51.943 10:36:55 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:51.943 10:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:51.943 10:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:51.943 10:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:51.943 10:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:51.943 10:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:51.943 10:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:51.943 10:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:51.943 10:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:51.943 10:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:51.943 10:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:51.943 10:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:51.943 10:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:51.943 10:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77018 00:13:51.943 10:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:51.943 10:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77018 00:13:51.943 10:36:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77018 ']' 00:13:51.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:51.943 10:36:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.943 10:36:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:51.943 10:36:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.943 10:36:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:51.943 10:36:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.943 [2024-11-20 10:36:55.312747] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:13:51.943 [2024-11-20 10:36:55.312951] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:51.943 Zero copy mechanism will not be used. 
00:13:51.943 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77018 ] 00:13:52.201 [2024-11-20 10:36:55.464831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.201 [2024-11-20 10:36:55.579497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.459 [2024-11-20 10:36:55.782406] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.459 [2024-11-20 10:36:55.782533] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.719 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:52.719 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:13:52.719 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:52.719 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:52.719 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.719 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.719 BaseBdev1_malloc 00:13:52.719 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.719 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:52.719 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.719 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.719 [2024-11-20 10:36:56.178095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:52.719 [2024-11-20 10:36:56.178225] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.719 [2024-11-20 10:36:56.178251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:52.719 [2024-11-20 10:36:56.178263] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.719 [2024-11-20 10:36:56.180344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.719 [2024-11-20 10:36:56.180390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:52.719 BaseBdev1 00:13:52.719 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.719 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:52.719 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:52.719 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.719 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.978 BaseBdev2_malloc 00:13:52.978 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.978 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:52.978 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.978 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.978 [2024-11-20 10:36:56.232551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:52.978 [2024-11-20 10:36:56.232680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.978 [2024-11-20 10:36:56.232704] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x616000007e80 00:13:52.978 [2024-11-20 10:36:56.232717] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.978 [2024-11-20 10:36:56.234801] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.978 [2024-11-20 10:36:56.234840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:52.979 BaseBdev2 00:13:52.979 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.979 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:52.979 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.979 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.979 spare_malloc 00:13:52.979 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.979 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:52.979 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.979 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.979 spare_delay 00:13:52.979 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.979 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:52.979 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.979 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.979 [2024-11-20 10:36:56.313705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:52.979 
[2024-11-20 10:36:56.313832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.979 [2024-11-20 10:36:56.313857] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:52.979 [2024-11-20 10:36:56.313867] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.979 [2024-11-20 10:36:56.316022] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.979 [2024-11-20 10:36:56.316075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:52.979 spare 00:13:52.979 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.979 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:52.979 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.979 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.979 [2024-11-20 10:36:56.325743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:52.979 [2024-11-20 10:36:56.327470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:52.979 [2024-11-20 10:36:56.327633] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:52.979 [2024-11-20 10:36:56.327650] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:52.979 [2024-11-20 10:36:56.327870] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:52.979 [2024-11-20 10:36:56.328018] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:52.979 [2024-11-20 10:36:56.328027] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000007780 00:13:52.979 [2024-11-20 10:36:56.328169] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:52.979 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.979 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:52.979 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.979 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.979 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.979 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.979 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:52.979 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.979 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.979 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.979 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.979 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.979 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.979 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.979 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.979 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.979 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.979 "name": "raid_bdev1", 00:13:52.979 "uuid": "291e2297-cc75-449a-8ac6-413afbdd50dd", 00:13:52.979 "strip_size_kb": 0, 00:13:52.979 "state": "online", 00:13:52.979 "raid_level": "raid1", 00:13:52.979 "superblock": true, 00:13:52.979 "num_base_bdevs": 2, 00:13:52.979 "num_base_bdevs_discovered": 2, 00:13:52.979 "num_base_bdevs_operational": 2, 00:13:52.979 "base_bdevs_list": [ 00:13:52.979 { 00:13:52.979 "name": "BaseBdev1", 00:13:52.979 "uuid": "f84d0777-5711-5309-8eb6-1fa8bf71bf36", 00:13:52.979 "is_configured": true, 00:13:52.979 "data_offset": 2048, 00:13:52.979 "data_size": 63488 00:13:52.979 }, 00:13:52.979 { 00:13:52.979 "name": "BaseBdev2", 00:13:52.979 "uuid": "a21e3c50-67dd-5c60-86f9-b91788fa5d84", 00:13:52.979 "is_configured": true, 00:13:52.979 "data_offset": 2048, 00:13:52.979 "data_size": 63488 00:13:52.979 } 00:13:52.979 ] 00:13:52.979 }' 00:13:52.979 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.979 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.554 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:53.554 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:53.554 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.554 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.554 [2024-11-20 10:36:56.809223] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:53.554 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.554 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:53.554 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r 
'.[].base_bdevs_list[0].data_offset' 00:13:53.554 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.554 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.554 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.554 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.554 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:53.554 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:53.554 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:53.554 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:53.554 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.554 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.554 [2024-11-20 10:36:56.896781] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:53.554 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.554 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:53.554 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:53.554 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.554 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:53.554 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:53.554 
10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:53.554 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.554 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.554 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.554 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.554 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.554 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.554 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.554 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.554 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.554 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.554 "name": "raid_bdev1", 00:13:53.554 "uuid": "291e2297-cc75-449a-8ac6-413afbdd50dd", 00:13:53.554 "strip_size_kb": 0, 00:13:53.554 "state": "online", 00:13:53.554 "raid_level": "raid1", 00:13:53.554 "superblock": true, 00:13:53.554 "num_base_bdevs": 2, 00:13:53.554 "num_base_bdevs_discovered": 1, 00:13:53.554 "num_base_bdevs_operational": 1, 00:13:53.554 "base_bdevs_list": [ 00:13:53.554 { 00:13:53.554 "name": null, 00:13:53.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.554 "is_configured": false, 00:13:53.554 "data_offset": 0, 00:13:53.554 "data_size": 63488 00:13:53.554 }, 00:13:53.554 { 00:13:53.554 "name": "BaseBdev2", 00:13:53.554 "uuid": "a21e3c50-67dd-5c60-86f9-b91788fa5d84", 00:13:53.554 "is_configured": true, 00:13:53.554 "data_offset": 2048, 
00:13:53.554 "data_size": 63488 00:13:53.554 } 00:13:53.554 ] 00:13:53.554 }' 00:13:53.554 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.554 10:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.554 [2024-11-20 10:36:57.001383] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:53.554 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:53.554 Zero copy mechanism will not be used. 00:13:53.554 Running I/O for 60 seconds... 00:13:53.855 10:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:53.855 10:36:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.855 10:36:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.113 [2024-11-20 10:36:57.332854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:54.113 10:36:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.113 10:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:54.113 [2024-11-20 10:36:57.380650] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:54.113 [2024-11-20 10:36:57.382673] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:54.113 [2024-11-20 10:36:57.509594] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:54.371 [2024-11-20 10:36:57.735468] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:54.371 [2024-11-20 10:36:57.735821] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:54.629 
208.00 IOPS, 624.00 MiB/s [2024-11-20T10:36:58.108Z] [2024-11-20 10:36:58.073625] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:54.887 [2024-11-20 10:36:58.300050] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:54.887 [2024-11-20 10:36:58.300394] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:55.147 10:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:55.147 10:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.147 10:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:55.147 10:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:55.147 10:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.147 10:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.147 10:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.147 10:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.147 10:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.147 10:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.147 10:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.147 "name": "raid_bdev1", 00:13:55.147 "uuid": "291e2297-cc75-449a-8ac6-413afbdd50dd", 00:13:55.147 "strip_size_kb": 0, 00:13:55.147 "state": "online", 00:13:55.147 "raid_level": "raid1", 00:13:55.147 "superblock": 
true, 00:13:55.147 "num_base_bdevs": 2, 00:13:55.147 "num_base_bdevs_discovered": 2, 00:13:55.147 "num_base_bdevs_operational": 2, 00:13:55.147 "process": { 00:13:55.147 "type": "rebuild", 00:13:55.147 "target": "spare", 00:13:55.147 "progress": { 00:13:55.147 "blocks": 10240, 00:13:55.147 "percent": 16 00:13:55.147 } 00:13:55.147 }, 00:13:55.147 "base_bdevs_list": [ 00:13:55.147 { 00:13:55.147 "name": "spare", 00:13:55.147 "uuid": "cff5c4a0-83a6-59a8-93b2-20856a5b8c40", 00:13:55.147 "is_configured": true, 00:13:55.147 "data_offset": 2048, 00:13:55.147 "data_size": 63488 00:13:55.147 }, 00:13:55.147 { 00:13:55.147 "name": "BaseBdev2", 00:13:55.147 "uuid": "a21e3c50-67dd-5c60-86f9-b91788fa5d84", 00:13:55.147 "is_configured": true, 00:13:55.147 "data_offset": 2048, 00:13:55.147 "data_size": 63488 00:13:55.147 } 00:13:55.147 ] 00:13:55.147 }' 00:13:55.147 10:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:55.147 10:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:55.147 10:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.147 10:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:55.147 10:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:55.147 10:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.147 10:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.147 [2024-11-20 10:36:58.519776] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:55.406 [2024-11-20 10:36:58.627966] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:55.406 [2024-11-20 10:36:58.634460] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:55.406 [2024-11-20 10:36:58.647507] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.406 [2024-11-20 10:36:58.647559] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:55.406 [2024-11-20 10:36:58.647588] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:55.406 [2024-11-20 10:36:58.700261] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:55.406 10:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.406 10:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:55.406 10:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:55.406 10:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:55.406 10:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:55.406 10:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:55.406 10:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:55.406 10:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.406 10:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.406 10:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.406 10:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.406 10:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.406 10:36:58 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.406 10:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.406 10:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.406 10:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.406 10:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.406 "name": "raid_bdev1", 00:13:55.406 "uuid": "291e2297-cc75-449a-8ac6-413afbdd50dd", 00:13:55.406 "strip_size_kb": 0, 00:13:55.406 "state": "online", 00:13:55.406 "raid_level": "raid1", 00:13:55.407 "superblock": true, 00:13:55.407 "num_base_bdevs": 2, 00:13:55.407 "num_base_bdevs_discovered": 1, 00:13:55.407 "num_base_bdevs_operational": 1, 00:13:55.407 "base_bdevs_list": [ 00:13:55.407 { 00:13:55.407 "name": null, 00:13:55.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.407 "is_configured": false, 00:13:55.407 "data_offset": 0, 00:13:55.407 "data_size": 63488 00:13:55.407 }, 00:13:55.407 { 00:13:55.407 "name": "BaseBdev2", 00:13:55.407 "uuid": "a21e3c50-67dd-5c60-86f9-b91788fa5d84", 00:13:55.407 "is_configured": true, 00:13:55.407 "data_offset": 2048, 00:13:55.407 "data_size": 63488 00:13:55.407 } 00:13:55.407 ] 00:13:55.407 }' 00:13:55.407 10:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.407 10:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.922 172.50 IOPS, 517.50 MiB/s [2024-11-20T10:36:59.401Z] 10:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:55.922 10:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.922 10:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:13:55.922 10:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:55.922 10:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.922 10:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.922 10:36:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.922 10:36:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.922 10:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.922 10:36:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.922 10:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.922 "name": "raid_bdev1", 00:13:55.922 "uuid": "291e2297-cc75-449a-8ac6-413afbdd50dd", 00:13:55.922 "strip_size_kb": 0, 00:13:55.922 "state": "online", 00:13:55.922 "raid_level": "raid1", 00:13:55.922 "superblock": true, 00:13:55.922 "num_base_bdevs": 2, 00:13:55.922 "num_base_bdevs_discovered": 1, 00:13:55.922 "num_base_bdevs_operational": 1, 00:13:55.922 "base_bdevs_list": [ 00:13:55.922 { 00:13:55.922 "name": null, 00:13:55.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.922 "is_configured": false, 00:13:55.922 "data_offset": 0, 00:13:55.922 "data_size": 63488 00:13:55.922 }, 00:13:55.922 { 00:13:55.922 "name": "BaseBdev2", 00:13:55.922 "uuid": "a21e3c50-67dd-5c60-86f9-b91788fa5d84", 00:13:55.922 "is_configured": true, 00:13:55.922 "data_offset": 2048, 00:13:55.922 "data_size": 63488 00:13:55.922 } 00:13:55.922 ] 00:13:55.922 }' 00:13:55.922 10:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:55.922 10:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:13:55.922 10:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.922 10:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:55.922 10:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:55.922 10:36:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.922 10:36:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.922 [2024-11-20 10:36:59.344550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:55.922 10:36:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.922 10:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:55.922 [2024-11-20 10:36:59.395160] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:55.922 [2024-11-20 10:36:59.397024] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:56.180 [2024-11-20 10:36:59.505660] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:56.180 [2024-11-20 10:36:59.506282] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:56.440 [2024-11-20 10:36:59.726269] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:56.440 [2024-11-20 10:36:59.726640] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:56.958 159.33 IOPS, 478.00 MiB/s [2024-11-20T10:37:00.437Z] [2024-11-20 10:37:00.190822] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 
offset_end: 12288 00:13:56.958 [2024-11-20 10:37:00.191191] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:56.958 10:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:56.958 10:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.958 10:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:56.958 10:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:56.958 10:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.958 10:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.958 10:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.958 10:37:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.958 10:37:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.958 10:37:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.219 10:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:57.219 "name": "raid_bdev1", 00:13:57.219 "uuid": "291e2297-cc75-449a-8ac6-413afbdd50dd", 00:13:57.219 "strip_size_kb": 0, 00:13:57.219 "state": "online", 00:13:57.219 "raid_level": "raid1", 00:13:57.219 "superblock": true, 00:13:57.219 "num_base_bdevs": 2, 00:13:57.219 "num_base_bdevs_discovered": 2, 00:13:57.219 "num_base_bdevs_operational": 2, 00:13:57.219 "process": { 00:13:57.219 "type": "rebuild", 00:13:57.219 "target": "spare", 00:13:57.219 "progress": { 00:13:57.219 "blocks": 12288, 00:13:57.219 "percent": 19 00:13:57.219 } 00:13:57.219 }, 
00:13:57.219 "base_bdevs_list": [ 00:13:57.219 { 00:13:57.219 "name": "spare", 00:13:57.219 "uuid": "cff5c4a0-83a6-59a8-93b2-20856a5b8c40", 00:13:57.219 "is_configured": true, 00:13:57.219 "data_offset": 2048, 00:13:57.219 "data_size": 63488 00:13:57.219 }, 00:13:57.219 { 00:13:57.219 "name": "BaseBdev2", 00:13:57.219 "uuid": "a21e3c50-67dd-5c60-86f9-b91788fa5d84", 00:13:57.219 "is_configured": true, 00:13:57.219 "data_offset": 2048, 00:13:57.219 "data_size": 63488 00:13:57.219 } 00:13:57.219 ] 00:13:57.219 }' 00:13:57.219 10:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:57.219 10:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:57.219 10:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:57.219 10:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:57.219 10:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:57.219 10:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:57.219 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:57.219 10:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:57.219 10:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:57.219 10:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:57.219 10:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=425 00:13:57.219 10:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:57.219 10:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:57.219 
10:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:57.219 10:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:57.219 10:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:57.219 10:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:57.219 10:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.219 10:37:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.219 10:37:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.219 10:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.219 10:37:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.219 10:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:57.219 "name": "raid_bdev1", 00:13:57.219 "uuid": "291e2297-cc75-449a-8ac6-413afbdd50dd", 00:13:57.219 "strip_size_kb": 0, 00:13:57.219 "state": "online", 00:13:57.219 "raid_level": "raid1", 00:13:57.219 "superblock": true, 00:13:57.219 "num_base_bdevs": 2, 00:13:57.219 "num_base_bdevs_discovered": 2, 00:13:57.219 "num_base_bdevs_operational": 2, 00:13:57.219 "process": { 00:13:57.219 "type": "rebuild", 00:13:57.219 "target": "spare", 00:13:57.219 "progress": { 00:13:57.219 "blocks": 14336, 00:13:57.219 "percent": 22 00:13:57.219 } 00:13:57.219 }, 00:13:57.219 "base_bdevs_list": [ 00:13:57.219 { 00:13:57.219 "name": "spare", 00:13:57.219 "uuid": "cff5c4a0-83a6-59a8-93b2-20856a5b8c40", 00:13:57.219 "is_configured": true, 00:13:57.219 "data_offset": 2048, 00:13:57.219 "data_size": 63488 00:13:57.219 }, 00:13:57.219 { 00:13:57.219 "name": "BaseBdev2", 00:13:57.219 "uuid": 
"a21e3c50-67dd-5c60-86f9-b91788fa5d84", 00:13:57.219 "is_configured": true, 00:13:57.219 "data_offset": 2048, 00:13:57.219 "data_size": 63488 00:13:57.219 } 00:13:57.219 ] 00:13:57.219 }' 00:13:57.219 10:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:57.219 10:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:57.219 10:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:57.219 [2024-11-20 10:37:00.633541] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:57.219 10:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:57.219 10:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:57.479 [2024-11-20 10:37:00.843649] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:57.742 147.50 IOPS, 442.50 MiB/s [2024-11-20T10:37:01.221Z] [2024-11-20 10:37:01.071199] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:57.742 [2024-11-20 10:37:01.071569] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:58.340 10:37:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:58.340 10:37:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:58.340 10:37:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.340 10:37:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:58.340 10:37:01 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:13:58.340 10:37:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.340 10:37:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.340 10:37:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.340 10:37:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.340 10:37:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.340 10:37:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.340 10:37:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.340 "name": "raid_bdev1", 00:13:58.340 "uuid": "291e2297-cc75-449a-8ac6-413afbdd50dd", 00:13:58.340 "strip_size_kb": 0, 00:13:58.340 "state": "online", 00:13:58.340 "raid_level": "raid1", 00:13:58.340 "superblock": true, 00:13:58.340 "num_base_bdevs": 2, 00:13:58.340 "num_base_bdevs_discovered": 2, 00:13:58.340 "num_base_bdevs_operational": 2, 00:13:58.340 "process": { 00:13:58.340 "type": "rebuild", 00:13:58.340 "target": "spare", 00:13:58.340 "progress": { 00:13:58.340 "blocks": 30720, 00:13:58.340 "percent": 48 00:13:58.340 } 00:13:58.340 }, 00:13:58.340 "base_bdevs_list": [ 00:13:58.340 { 00:13:58.340 "name": "spare", 00:13:58.340 "uuid": "cff5c4a0-83a6-59a8-93b2-20856a5b8c40", 00:13:58.340 "is_configured": true, 00:13:58.340 "data_offset": 2048, 00:13:58.340 "data_size": 63488 00:13:58.340 }, 00:13:58.340 { 00:13:58.340 "name": "BaseBdev2", 00:13:58.340 "uuid": "a21e3c50-67dd-5c60-86f9-b91788fa5d84", 00:13:58.340 "is_configured": true, 00:13:58.340 "data_offset": 2048, 00:13:58.340 "data_size": 63488 00:13:58.340 } 00:13:58.340 ] 00:13:58.340 }' 00:13:58.340 10:37:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:13:58.340 10:37:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:58.340 10:37:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.600 10:37:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:58.600 10:37:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:58.860 126.60 IOPS, 379.80 MiB/s [2024-11-20T10:37:02.339Z] [2024-11-20 10:37:02.110072] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:58.860 [2024-11-20 10:37:02.223792] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:58.860 [2024-11-20 10:37:02.224042] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:59.119 [2024-11-20 10:37:02.555891] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:59.378 [2024-11-20 10:37:02.677667] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:59.378 10:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:59.378 10:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:59.378 10:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:59.378 10:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:59.378 10:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:59.378 10:37:02 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:59.378 10:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.378 10:37:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.378 10:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.378 10:37:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.639 10:37:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.639 10:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:59.639 "name": "raid_bdev1", 00:13:59.639 "uuid": "291e2297-cc75-449a-8ac6-413afbdd50dd", 00:13:59.639 "strip_size_kb": 0, 00:13:59.639 "state": "online", 00:13:59.639 "raid_level": "raid1", 00:13:59.639 "superblock": true, 00:13:59.639 "num_base_bdevs": 2, 00:13:59.639 "num_base_bdevs_discovered": 2, 00:13:59.639 "num_base_bdevs_operational": 2, 00:13:59.639 "process": { 00:13:59.639 "type": "rebuild", 00:13:59.639 "target": "spare", 00:13:59.639 "progress": { 00:13:59.639 "blocks": 47104, 00:13:59.639 "percent": 74 00:13:59.639 } 00:13:59.639 }, 00:13:59.639 "base_bdevs_list": [ 00:13:59.639 { 00:13:59.639 "name": "spare", 00:13:59.639 "uuid": "cff5c4a0-83a6-59a8-93b2-20856a5b8c40", 00:13:59.639 "is_configured": true, 00:13:59.639 "data_offset": 2048, 00:13:59.639 "data_size": 63488 00:13:59.639 }, 00:13:59.639 { 00:13:59.639 "name": "BaseBdev2", 00:13:59.639 "uuid": "a21e3c50-67dd-5c60-86f9-b91788fa5d84", 00:13:59.639 "is_configured": true, 00:13:59.639 "data_offset": 2048, 00:13:59.639 "data_size": 63488 00:13:59.639 } 00:13:59.639 ] 00:13:59.639 }' 00:13:59.639 10:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:59.639 10:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:13:59.639 10:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:59.639 10:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:59.639 10:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:59.639 112.83 IOPS, 338.50 MiB/s [2024-11-20T10:37:03.118Z] [2024-11-20 10:37:03.008617] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:59.898 [2024-11-20 10:37:03.210790] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:13:59.898 [2024-11-20 10:37:03.211141] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:00.157 [2024-11-20 10:37:03.535584] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:00.416 [2024-11-20 10:37:03.759807] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:00.416 [2024-11-20 10:37:03.859731] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:00.416 [2024-11-20 10:37:03.862066] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.675 10:37:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:00.675 10:37:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:00.675 10:37:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.675 10:37:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:00.675 10:37:03 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:14:00.675 10:37:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.675 10:37:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.675 10:37:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.675 10:37:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.675 10:37:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.675 102.57 IOPS, 307.71 MiB/s [2024-11-20T10:37:04.154Z] 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.675 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.675 "name": "raid_bdev1", 00:14:00.675 "uuid": "291e2297-cc75-449a-8ac6-413afbdd50dd", 00:14:00.675 "strip_size_kb": 0, 00:14:00.675 "state": "online", 00:14:00.675 "raid_level": "raid1", 00:14:00.675 "superblock": true, 00:14:00.675 "num_base_bdevs": 2, 00:14:00.675 "num_base_bdevs_discovered": 2, 00:14:00.675 "num_base_bdevs_operational": 2, 00:14:00.675 "base_bdevs_list": [ 00:14:00.675 { 00:14:00.675 "name": "spare", 00:14:00.675 "uuid": "cff5c4a0-83a6-59a8-93b2-20856a5b8c40", 00:14:00.675 "is_configured": true, 00:14:00.675 "data_offset": 2048, 00:14:00.675 "data_size": 63488 00:14:00.675 }, 00:14:00.675 { 00:14:00.675 "name": "BaseBdev2", 00:14:00.675 "uuid": "a21e3c50-67dd-5c60-86f9-b91788fa5d84", 00:14:00.675 "is_configured": true, 00:14:00.675 "data_offset": 2048, 00:14:00.675 "data_size": 63488 00:14:00.675 } 00:14:00.675 ] 00:14:00.675 }' 00:14:00.675 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.675 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:00.675 10:37:04 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.675 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:00.675 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:00.675 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:00.675 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.675 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:00.675 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:00.675 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.675 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.675 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.675 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.675 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.675 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.935 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.935 "name": "raid_bdev1", 00:14:00.935 "uuid": "291e2297-cc75-449a-8ac6-413afbdd50dd", 00:14:00.935 "strip_size_kb": 0, 00:14:00.935 "state": "online", 00:14:00.935 "raid_level": "raid1", 00:14:00.935 "superblock": true, 00:14:00.935 "num_base_bdevs": 2, 00:14:00.935 "num_base_bdevs_discovered": 2, 00:14:00.935 "num_base_bdevs_operational": 2, 00:14:00.935 "base_bdevs_list": [ 00:14:00.935 { 00:14:00.935 "name": "spare", 00:14:00.935 "uuid": 
"cff5c4a0-83a6-59a8-93b2-20856a5b8c40", 00:14:00.935 "is_configured": true, 00:14:00.935 "data_offset": 2048, 00:14:00.935 "data_size": 63488 00:14:00.935 }, 00:14:00.935 { 00:14:00.935 "name": "BaseBdev2", 00:14:00.935 "uuid": "a21e3c50-67dd-5c60-86f9-b91788fa5d84", 00:14:00.935 "is_configured": true, 00:14:00.935 "data_offset": 2048, 00:14:00.935 "data_size": 63488 00:14:00.935 } 00:14:00.935 ] 00:14:00.935 }' 00:14:00.935 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.935 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:00.935 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.935 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:00.935 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:00.936 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.936 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.936 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:00.936 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:00.936 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:00.936 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.936 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.936 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.936 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:14:00.936 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.936 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.936 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.936 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.936 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.936 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.936 "name": "raid_bdev1", 00:14:00.936 "uuid": "291e2297-cc75-449a-8ac6-413afbdd50dd", 00:14:00.936 "strip_size_kb": 0, 00:14:00.936 "state": "online", 00:14:00.936 "raid_level": "raid1", 00:14:00.936 "superblock": true, 00:14:00.936 "num_base_bdevs": 2, 00:14:00.936 "num_base_bdevs_discovered": 2, 00:14:00.936 "num_base_bdevs_operational": 2, 00:14:00.936 "base_bdevs_list": [ 00:14:00.936 { 00:14:00.936 "name": "spare", 00:14:00.936 "uuid": "cff5c4a0-83a6-59a8-93b2-20856a5b8c40", 00:14:00.936 "is_configured": true, 00:14:00.936 "data_offset": 2048, 00:14:00.936 "data_size": 63488 00:14:00.936 }, 00:14:00.936 { 00:14:00.936 "name": "BaseBdev2", 00:14:00.936 "uuid": "a21e3c50-67dd-5c60-86f9-b91788fa5d84", 00:14:00.936 "is_configured": true, 00:14:00.936 "data_offset": 2048, 00:14:00.936 "data_size": 63488 00:14:00.936 } 00:14:00.936 ] 00:14:00.936 }' 00:14:00.936 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.936 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.503 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:01.503 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.503 10:37:04 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.503 [2024-11-20 10:37:04.726879] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:01.503 [2024-11-20 10:37:04.726915] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:01.503 00:14:01.503 Latency(us) 00:14:01.503 [2024-11-20T10:37:04.982Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:01.503 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:01.503 raid_bdev1 : 7.79 97.09 291.28 0.00 0.00 13283.64 304.07 140115.40 00:14:01.503 [2024-11-20T10:37:04.982Z] =================================================================================================================== 00:14:01.503 [2024-11-20T10:37:04.982Z] Total : 97.09 291.28 0.00 0.00 13283.64 304.07 140115.40 00:14:01.503 [2024-11-20 10:37:04.796473] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.503 [2024-11-20 10:37:04.796537] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:01.503 [2024-11-20 10:37:04.796617] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:01.503 [2024-11-20 10:37:04.796627] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:01.503 { 00:14:01.503 "results": [ 00:14:01.503 { 00:14:01.503 "job": "raid_bdev1", 00:14:01.503 "core_mask": "0x1", 00:14:01.503 "workload": "randrw", 00:14:01.503 "percentage": 50, 00:14:01.503 "status": "finished", 00:14:01.503 "queue_depth": 2, 00:14:01.503 "io_size": 3145728, 00:14:01.503 "runtime": 7.786372, 00:14:01.503 "iops": 97.09271532364495, 00:14:01.503 "mibps": 291.2781459709349, 00:14:01.503 "io_failed": 0, 00:14:01.503 "io_timeout": 0, 00:14:01.503 "avg_latency_us": 13283.639546221204, 00:14:01.503 "min_latency_us": 
304.0698689956332, 00:14:01.503 "max_latency_us": 140115.39563318779 00:14:01.503 } 00:14:01.503 ], 00:14:01.503 "core_count": 1 00:14:01.503 } 00:14:01.503 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.503 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.503 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:01.503 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.503 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.503 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.503 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:01.503 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:01.503 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:01.503 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:01.503 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:01.503 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:01.503 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:01.503 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:01.503 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:01.503 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:01.504 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:01.504 10:37:04 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:01.504 10:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:01.761 /dev/nbd0 00:14:01.761 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:01.761 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:01.761 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:01.761 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:01.761 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:01.761 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:01.761 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:01.761 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:01.761 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:01.761 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:01.761 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:01.761 1+0 records in 00:14:01.761 1+0 records out 00:14:01.761 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409026 s, 10.0 MB/s 00:14:01.761 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.761 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:01.761 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.761 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:01.761 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:01.761 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:01.761 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:01.761 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:01.761 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:14:01.761 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:14:01.761 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:01.762 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:14:01.762 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:01.762 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:01.762 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:01.762 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:01.762 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:01.762 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:01.762 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:14:02.021 /dev/nbd1 00:14:02.021 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # 
basename /dev/nbd1 00:14:02.021 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:02.021 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:02.021 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:02.021 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:02.021 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:02.021 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:02.021 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:02.021 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:02.021 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:02.021 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:02.021 1+0 records in 00:14:02.021 1+0 records out 00:14:02.021 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206023 s, 19.9 MB/s 00:14:02.021 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.021 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:02.021 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.021 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:02.021 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:02.021 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:02.021 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:02.021 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:02.285 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:02.285 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:02.285 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:02.285 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:02.285 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:02.285 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:02.285 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:02.285 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:02.285 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:02.285 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:02.285 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:02.285 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:02.285 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:02.285 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:02.285 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:02.285 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:02.285 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:02.285 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:02.285 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:02.285 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:02.285 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:02.285 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:02.544 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:02.544 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:02.544 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:02.544 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:02.544 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:02.544 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:02.544 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:02.544 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:02.544 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:02.544 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:02.544 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.544 10:37:05 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@10 -- # set +x 00:14:02.544 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.544 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:02.544 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.544 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.544 [2024-11-20 10:37:05.975062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:02.544 [2024-11-20 10:37:05.975114] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.544 [2024-11-20 10:37:05.975136] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:02.544 [2024-11-20 10:37:05.975145] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.544 [2024-11-20 10:37:05.977334] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.544 [2024-11-20 10:37:05.977388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:02.544 [2024-11-20 10:37:05.977482] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:02.544 [2024-11-20 10:37:05.977535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:02.544 [2024-11-20 10:37:05.977677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:02.544 spare 00:14:02.544 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.544 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:02.544 10:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.544 10:37:05 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.812 [2024-11-20 10:37:06.077575] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:02.812 [2024-11-20 10:37:06.077626] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:02.812 [2024-11-20 10:37:06.077957] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:14:02.812 [2024-11-20 10:37:06.078158] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:02.812 [2024-11-20 10:37:06.078171] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:02.812 [2024-11-20 10:37:06.078412] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.812 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.812 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:02.812 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.812 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.812 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.812 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.812 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:02.812 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.812 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.812 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:02.812 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.812 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.812 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.812 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.812 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.812 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.812 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.812 "name": "raid_bdev1", 00:14:02.812 "uuid": "291e2297-cc75-449a-8ac6-413afbdd50dd", 00:14:02.812 "strip_size_kb": 0, 00:14:02.812 "state": "online", 00:14:02.812 "raid_level": "raid1", 00:14:02.812 "superblock": true, 00:14:02.812 "num_base_bdevs": 2, 00:14:02.812 "num_base_bdevs_discovered": 2, 00:14:02.812 "num_base_bdevs_operational": 2, 00:14:02.812 "base_bdevs_list": [ 00:14:02.812 { 00:14:02.812 "name": "spare", 00:14:02.813 "uuid": "cff5c4a0-83a6-59a8-93b2-20856a5b8c40", 00:14:02.813 "is_configured": true, 00:14:02.813 "data_offset": 2048, 00:14:02.813 "data_size": 63488 00:14:02.813 }, 00:14:02.813 { 00:14:02.813 "name": "BaseBdev2", 00:14:02.813 "uuid": "a21e3c50-67dd-5c60-86f9-b91788fa5d84", 00:14:02.813 "is_configured": true, 00:14:02.813 "data_offset": 2048, 00:14:02.813 "data_size": 63488 00:14:02.813 } 00:14:02.813 ] 00:14:02.813 }' 00:14:02.813 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.813 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.073 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:03.073 10:37:06 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.073 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:03.073 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:03.073 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.073 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.073 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.073 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.073 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.073 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.333 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.333 "name": "raid_bdev1", 00:14:03.333 "uuid": "291e2297-cc75-449a-8ac6-413afbdd50dd", 00:14:03.333 "strip_size_kb": 0, 00:14:03.333 "state": "online", 00:14:03.333 "raid_level": "raid1", 00:14:03.333 "superblock": true, 00:14:03.333 "num_base_bdevs": 2, 00:14:03.333 "num_base_bdevs_discovered": 2, 00:14:03.333 "num_base_bdevs_operational": 2, 00:14:03.333 "base_bdevs_list": [ 00:14:03.333 { 00:14:03.333 "name": "spare", 00:14:03.333 "uuid": "cff5c4a0-83a6-59a8-93b2-20856a5b8c40", 00:14:03.333 "is_configured": true, 00:14:03.333 "data_offset": 2048, 00:14:03.333 "data_size": 63488 00:14:03.333 }, 00:14:03.333 { 00:14:03.333 "name": "BaseBdev2", 00:14:03.333 "uuid": "a21e3c50-67dd-5c60-86f9-b91788fa5d84", 00:14:03.333 "is_configured": true, 00:14:03.333 "data_offset": 2048, 00:14:03.333 "data_size": 63488 00:14:03.333 } 00:14:03.333 ] 00:14:03.333 }' 00:14:03.333 10:37:06 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.333 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:03.333 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.333 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:03.333 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.333 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:03.333 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.333 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.333 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.333 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:03.333 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:03.333 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.333 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.333 [2024-11-20 10:37:06.694012] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:03.333 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.333 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:03.333 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.333 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:03.333 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:03.333 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:03.333 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:03.333 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.333 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.333 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.333 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.333 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.333 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.333 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.333 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.333 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.333 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.333 "name": "raid_bdev1", 00:14:03.334 "uuid": "291e2297-cc75-449a-8ac6-413afbdd50dd", 00:14:03.334 "strip_size_kb": 0, 00:14:03.334 "state": "online", 00:14:03.334 "raid_level": "raid1", 00:14:03.334 "superblock": true, 00:14:03.334 "num_base_bdevs": 2, 00:14:03.334 "num_base_bdevs_discovered": 1, 00:14:03.334 "num_base_bdevs_operational": 1, 00:14:03.334 "base_bdevs_list": [ 00:14:03.334 { 00:14:03.334 "name": null, 00:14:03.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.334 "is_configured": false, 00:14:03.334 
"data_offset": 0, 00:14:03.334 "data_size": 63488 00:14:03.334 }, 00:14:03.334 { 00:14:03.334 "name": "BaseBdev2", 00:14:03.334 "uuid": "a21e3c50-67dd-5c60-86f9-b91788fa5d84", 00:14:03.334 "is_configured": true, 00:14:03.334 "data_offset": 2048, 00:14:03.334 "data_size": 63488 00:14:03.334 } 00:14:03.334 ] 00:14:03.334 }' 00:14:03.334 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.334 10:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.901 10:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:03.901 10:37:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.901 10:37:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.901 [2024-11-20 10:37:07.121380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:03.901 [2024-11-20 10:37:07.121587] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:03.901 [2024-11-20 10:37:07.121611] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:03.901 [2024-11-20 10:37:07.121648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:03.901 [2024-11-20 10:37:07.138293] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:14:03.901 10:37:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.901 10:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:03.901 [2024-11-20 10:37:07.140315] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:04.838 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:04.838 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.838 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:04.838 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:04.838 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.838 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.838 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.838 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.838 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.838 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.838 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.838 "name": "raid_bdev1", 00:14:04.838 "uuid": "291e2297-cc75-449a-8ac6-413afbdd50dd", 00:14:04.838 "strip_size_kb": 0, 00:14:04.838 "state": "online", 
00:14:04.838 "raid_level": "raid1", 00:14:04.838 "superblock": true, 00:14:04.838 "num_base_bdevs": 2, 00:14:04.838 "num_base_bdevs_discovered": 2, 00:14:04.838 "num_base_bdevs_operational": 2, 00:14:04.838 "process": { 00:14:04.838 "type": "rebuild", 00:14:04.838 "target": "spare", 00:14:04.838 "progress": { 00:14:04.838 "blocks": 20480, 00:14:04.838 "percent": 32 00:14:04.838 } 00:14:04.838 }, 00:14:04.838 "base_bdevs_list": [ 00:14:04.838 { 00:14:04.838 "name": "spare", 00:14:04.838 "uuid": "cff5c4a0-83a6-59a8-93b2-20856a5b8c40", 00:14:04.838 "is_configured": true, 00:14:04.838 "data_offset": 2048, 00:14:04.838 "data_size": 63488 00:14:04.838 }, 00:14:04.838 { 00:14:04.838 "name": "BaseBdev2", 00:14:04.838 "uuid": "a21e3c50-67dd-5c60-86f9-b91788fa5d84", 00:14:04.838 "is_configured": true, 00:14:04.838 "data_offset": 2048, 00:14:04.838 "data_size": 63488 00:14:04.838 } 00:14:04.838 ] 00:14:04.838 }' 00:14:04.838 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.838 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:04.838 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.838 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:04.838 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:04.838 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.838 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.838 [2024-11-20 10:37:08.279817] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:05.097 [2024-11-20 10:37:08.346024] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:05.097 [2024-11-20 
10:37:08.346093] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.097 [2024-11-20 10:37:08.346109] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:05.097 [2024-11-20 10:37:08.346121] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:05.097 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.097 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:05.097 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.097 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.097 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.097 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:05.097 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:05.097 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.097 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.097 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.098 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.098 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.098 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.098 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.098 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:14:05.098 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.098 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.098 "name": "raid_bdev1", 00:14:05.098 "uuid": "291e2297-cc75-449a-8ac6-413afbdd50dd", 00:14:05.098 "strip_size_kb": 0, 00:14:05.098 "state": "online", 00:14:05.098 "raid_level": "raid1", 00:14:05.098 "superblock": true, 00:14:05.098 "num_base_bdevs": 2, 00:14:05.098 "num_base_bdevs_discovered": 1, 00:14:05.098 "num_base_bdevs_operational": 1, 00:14:05.098 "base_bdevs_list": [ 00:14:05.098 { 00:14:05.098 "name": null, 00:14:05.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.098 "is_configured": false, 00:14:05.098 "data_offset": 0, 00:14:05.098 "data_size": 63488 00:14:05.098 }, 00:14:05.098 { 00:14:05.098 "name": "BaseBdev2", 00:14:05.098 "uuid": "a21e3c50-67dd-5c60-86f9-b91788fa5d84", 00:14:05.098 "is_configured": true, 00:14:05.098 "data_offset": 2048, 00:14:05.098 "data_size": 63488 00:14:05.098 } 00:14:05.098 ] 00:14:05.098 }' 00:14:05.098 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.098 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.357 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:05.357 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.357 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.357 [2024-11-20 10:37:08.800406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:05.357 [2024-11-20 10:37:08.800483] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.357 [2024-11-20 10:37:08.800509] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:14:05.357 [2024-11-20 10:37:08.800520] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.357 [2024-11-20 10:37:08.801004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.357 [2024-11-20 10:37:08.801025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:05.357 [2024-11-20 10:37:08.801123] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:05.357 [2024-11-20 10:37:08.801139] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:05.357 [2024-11-20 10:37:08.801148] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:05.357 [2024-11-20 10:37:08.801171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:05.357 [2024-11-20 10:37:08.817425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:14:05.357 spare 00:14:05.357 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.357 [2024-11-20 10:37:08.819253] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:05.357 10:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:06.736 10:37:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:06.736 10:37:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.736 10:37:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:06.736 10:37:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:06.736 10:37:09 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.736 10:37:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.736 10:37:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.736 10:37:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.736 10:37:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.736 10:37:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.736 10:37:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.736 "name": "raid_bdev1", 00:14:06.736 "uuid": "291e2297-cc75-449a-8ac6-413afbdd50dd", 00:14:06.736 "strip_size_kb": 0, 00:14:06.736 "state": "online", 00:14:06.736 "raid_level": "raid1", 00:14:06.736 "superblock": true, 00:14:06.736 "num_base_bdevs": 2, 00:14:06.736 "num_base_bdevs_discovered": 2, 00:14:06.736 "num_base_bdevs_operational": 2, 00:14:06.736 "process": { 00:14:06.736 "type": "rebuild", 00:14:06.736 "target": "spare", 00:14:06.736 "progress": { 00:14:06.736 "blocks": 20480, 00:14:06.736 "percent": 32 00:14:06.736 } 00:14:06.736 }, 00:14:06.736 "base_bdevs_list": [ 00:14:06.736 { 00:14:06.736 "name": "spare", 00:14:06.736 "uuid": "cff5c4a0-83a6-59a8-93b2-20856a5b8c40", 00:14:06.736 "is_configured": true, 00:14:06.736 "data_offset": 2048, 00:14:06.736 "data_size": 63488 00:14:06.736 }, 00:14:06.736 { 00:14:06.736 "name": "BaseBdev2", 00:14:06.736 "uuid": "a21e3c50-67dd-5c60-86f9-b91788fa5d84", 00:14:06.736 "is_configured": true, 00:14:06.736 "data_offset": 2048, 00:14:06.736 "data_size": 63488 00:14:06.736 } 00:14:06.736 ] 00:14:06.736 }' 00:14:06.736 10:37:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.736 10:37:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:14:06.736 10:37:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.736 10:37:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:06.736 10:37:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:06.736 10:37:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.736 10:37:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.736 [2024-11-20 10:37:09.963928] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:06.736 [2024-11-20 10:37:10.024849] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:06.736 [2024-11-20 10:37:10.024914] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.736 [2024-11-20 10:37:10.024935] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:06.736 [2024-11-20 10:37:10.024942] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:06.736 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.736 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:06.736 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.736 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.736 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:06.736 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:06.736 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:14:06.736 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.736 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.736 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.736 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.736 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.736 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.736 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.736 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.736 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.736 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.736 "name": "raid_bdev1", 00:14:06.736 "uuid": "291e2297-cc75-449a-8ac6-413afbdd50dd", 00:14:06.736 "strip_size_kb": 0, 00:14:06.736 "state": "online", 00:14:06.736 "raid_level": "raid1", 00:14:06.736 "superblock": true, 00:14:06.736 "num_base_bdevs": 2, 00:14:06.736 "num_base_bdevs_discovered": 1, 00:14:06.736 "num_base_bdevs_operational": 1, 00:14:06.736 "base_bdevs_list": [ 00:14:06.736 { 00:14:06.736 "name": null, 00:14:06.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.736 "is_configured": false, 00:14:06.736 "data_offset": 0, 00:14:06.736 "data_size": 63488 00:14:06.736 }, 00:14:06.736 { 00:14:06.736 "name": "BaseBdev2", 00:14:06.736 "uuid": "a21e3c50-67dd-5c60-86f9-b91788fa5d84", 00:14:06.736 "is_configured": true, 00:14:06.736 "data_offset": 2048, 00:14:06.736 "data_size": 63488 00:14:06.736 } 00:14:06.736 ] 00:14:06.736 }' 
00:14:06.736 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.736 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.305 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:07.305 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.305 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:07.305 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:07.305 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.305 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.305 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.305 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.305 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.305 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.305 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.305 "name": "raid_bdev1", 00:14:07.305 "uuid": "291e2297-cc75-449a-8ac6-413afbdd50dd", 00:14:07.305 "strip_size_kb": 0, 00:14:07.305 "state": "online", 00:14:07.305 "raid_level": "raid1", 00:14:07.305 "superblock": true, 00:14:07.305 "num_base_bdevs": 2, 00:14:07.305 "num_base_bdevs_discovered": 1, 00:14:07.305 "num_base_bdevs_operational": 1, 00:14:07.305 "base_bdevs_list": [ 00:14:07.305 { 00:14:07.305 "name": null, 00:14:07.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.305 "is_configured": false, 00:14:07.305 "data_offset": 0, 
00:14:07.305 "data_size": 63488 00:14:07.305 }, 00:14:07.305 { 00:14:07.305 "name": "BaseBdev2", 00:14:07.305 "uuid": "a21e3c50-67dd-5c60-86f9-b91788fa5d84", 00:14:07.305 "is_configured": true, 00:14:07.305 "data_offset": 2048, 00:14:07.305 "data_size": 63488 00:14:07.305 } 00:14:07.305 ] 00:14:07.305 }' 00:14:07.305 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.305 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:07.305 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.305 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:07.305 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:07.305 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.305 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.305 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.305 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:07.305 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.305 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.305 [2024-11-20 10:37:10.644219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:07.305 [2024-11-20 10:37:10.644287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.305 [2024-11-20 10:37:10.644311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:07.305 [2024-11-20 10:37:10.644320] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.305 [2024-11-20 10:37:10.644789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.305 [2024-11-20 10:37:10.644812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:07.305 [2024-11-20 10:37:10.644905] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:07.305 [2024-11-20 10:37:10.644920] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:07.305 [2024-11-20 10:37:10.644930] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:07.305 [2024-11-20 10:37:10.644940] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:07.305 BaseBdev1 00:14:07.305 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.305 10:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:08.244 10:37:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:08.244 10:37:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.244 10:37:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.244 10:37:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:08.244 10:37:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:08.244 10:37:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:08.244 10:37:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.244 10:37:11 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.244 10:37:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.244 10:37:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.244 10:37:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.244 10:37:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.244 10:37:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.244 10:37:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.244 10:37:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.244 10:37:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.244 "name": "raid_bdev1", 00:14:08.244 "uuid": "291e2297-cc75-449a-8ac6-413afbdd50dd", 00:14:08.244 "strip_size_kb": 0, 00:14:08.244 "state": "online", 00:14:08.244 "raid_level": "raid1", 00:14:08.244 "superblock": true, 00:14:08.244 "num_base_bdevs": 2, 00:14:08.244 "num_base_bdevs_discovered": 1, 00:14:08.244 "num_base_bdevs_operational": 1, 00:14:08.244 "base_bdevs_list": [ 00:14:08.244 { 00:14:08.244 "name": null, 00:14:08.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.244 "is_configured": false, 00:14:08.245 "data_offset": 0, 00:14:08.245 "data_size": 63488 00:14:08.245 }, 00:14:08.245 { 00:14:08.245 "name": "BaseBdev2", 00:14:08.245 "uuid": "a21e3c50-67dd-5c60-86f9-b91788fa5d84", 00:14:08.245 "is_configured": true, 00:14:08.245 "data_offset": 2048, 00:14:08.245 "data_size": 63488 00:14:08.245 } 00:14:08.245 ] 00:14:08.245 }' 00:14:08.245 10:37:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.245 10:37:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:14:08.812 10:37:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:08.812 10:37:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.812 10:37:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:08.812 10:37:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:08.812 10:37:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.812 10:37:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.812 10:37:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.812 10:37:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.812 10:37:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.812 10:37:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.812 10:37:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.812 "name": "raid_bdev1", 00:14:08.812 "uuid": "291e2297-cc75-449a-8ac6-413afbdd50dd", 00:14:08.812 "strip_size_kb": 0, 00:14:08.812 "state": "online", 00:14:08.812 "raid_level": "raid1", 00:14:08.812 "superblock": true, 00:14:08.812 "num_base_bdevs": 2, 00:14:08.812 "num_base_bdevs_discovered": 1, 00:14:08.812 "num_base_bdevs_operational": 1, 00:14:08.812 "base_bdevs_list": [ 00:14:08.812 { 00:14:08.812 "name": null, 00:14:08.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.812 "is_configured": false, 00:14:08.812 "data_offset": 0, 00:14:08.812 "data_size": 63488 00:14:08.812 }, 00:14:08.812 { 00:14:08.812 "name": "BaseBdev2", 00:14:08.812 "uuid": "a21e3c50-67dd-5c60-86f9-b91788fa5d84", 00:14:08.812 "is_configured": true, 
00:14:08.812 "data_offset": 2048, 00:14:08.812 "data_size": 63488 00:14:08.812 } 00:14:08.812 ] 00:14:08.812 }' 00:14:08.812 10:37:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.812 10:37:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:08.812 10:37:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.812 10:37:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:08.812 10:37:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:08.812 10:37:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:14:08.812 10:37:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:08.812 10:37:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:08.812 10:37:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:08.812 10:37:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:08.812 10:37:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:08.812 10:37:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:08.812 10:37:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.812 10:37:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.812 [2024-11-20 10:37:12.237717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:08.812 [2024-11-20 10:37:12.237885] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:08.812 [2024-11-20 10:37:12.237913] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:08.812 request: 00:14:08.812 { 00:14:08.812 "base_bdev": "BaseBdev1", 00:14:08.812 "raid_bdev": "raid_bdev1", 00:14:08.813 "method": "bdev_raid_add_base_bdev", 00:14:08.813 "req_id": 1 00:14:08.813 } 00:14:08.813 Got JSON-RPC error response 00:14:08.813 response: 00:14:08.813 { 00:14:08.813 "code": -22, 00:14:08.813 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:08.813 } 00:14:08.813 10:37:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:08.813 10:37:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:14:08.813 10:37:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:08.813 10:37:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:08.813 10:37:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:08.813 10:37:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:10.210 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:10.210 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.210 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.210 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:10.210 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:10.210 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:14:10.210 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.210 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.210 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.210 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.210 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.210 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.210 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.210 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.210 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.210 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.210 "name": "raid_bdev1", 00:14:10.210 "uuid": "291e2297-cc75-449a-8ac6-413afbdd50dd", 00:14:10.210 "strip_size_kb": 0, 00:14:10.210 "state": "online", 00:14:10.210 "raid_level": "raid1", 00:14:10.210 "superblock": true, 00:14:10.210 "num_base_bdevs": 2, 00:14:10.210 "num_base_bdevs_discovered": 1, 00:14:10.210 "num_base_bdevs_operational": 1, 00:14:10.210 "base_bdevs_list": [ 00:14:10.210 { 00:14:10.210 "name": null, 00:14:10.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.210 "is_configured": false, 00:14:10.210 "data_offset": 0, 00:14:10.210 "data_size": 63488 00:14:10.210 }, 00:14:10.210 { 00:14:10.210 "name": "BaseBdev2", 00:14:10.210 "uuid": "a21e3c50-67dd-5c60-86f9-b91788fa5d84", 00:14:10.210 "is_configured": true, 00:14:10.210 "data_offset": 2048, 00:14:10.210 "data_size": 63488 00:14:10.210 } 00:14:10.210 ] 00:14:10.210 }' 
00:14:10.210 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.210 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.210 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:10.210 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.210 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:10.210 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:10.210 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.210 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.210 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.210 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.210 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.470 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.470 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.470 "name": "raid_bdev1", 00:14:10.470 "uuid": "291e2297-cc75-449a-8ac6-413afbdd50dd", 00:14:10.470 "strip_size_kb": 0, 00:14:10.470 "state": "online", 00:14:10.470 "raid_level": "raid1", 00:14:10.470 "superblock": true, 00:14:10.470 "num_base_bdevs": 2, 00:14:10.470 "num_base_bdevs_discovered": 1, 00:14:10.470 "num_base_bdevs_operational": 1, 00:14:10.470 "base_bdevs_list": [ 00:14:10.470 { 00:14:10.470 "name": null, 00:14:10.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.470 "is_configured": false, 00:14:10.470 "data_offset": 0, 
00:14:10.470 "data_size": 63488 00:14:10.470 }, 00:14:10.470 { 00:14:10.470 "name": "BaseBdev2", 00:14:10.470 "uuid": "a21e3c50-67dd-5c60-86f9-b91788fa5d84", 00:14:10.470 "is_configured": true, 00:14:10.470 "data_offset": 2048, 00:14:10.470 "data_size": 63488 00:14:10.470 } 00:14:10.470 ] 00:14:10.470 }' 00:14:10.470 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.470 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:10.470 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.470 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:10.470 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77018 00:14:10.470 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77018 ']' 00:14:10.470 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77018 00:14:10.470 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:14:10.470 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:10.470 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77018 00:14:10.470 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:10.470 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:10.470 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77018' 00:14:10.470 killing process with pid 77018 00:14:10.470 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77018 00:14:10.470 Received shutdown signal, test time was 
about 16.878110 seconds 00:14:10.470 00:14:10.470 Latency(us) 00:14:10.470 [2024-11-20T10:37:13.949Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.470 [2024-11-20T10:37:13.949Z] =================================================================================================================== 00:14:10.470 [2024-11-20T10:37:13.949Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:10.470 [2024-11-20 10:37:13.848914] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:10.470 [2024-11-20 10:37:13.849049] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:10.470 10:37:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77018 00:14:10.470 [2024-11-20 10:37:13.849105] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:10.470 [2024-11-20 10:37:13.849117] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:10.730 [2024-11-20 10:37:14.086645] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:12.135 10:37:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:12.135 00:14:12.135 real 0m20.042s 00:14:12.135 user 0m26.234s 00:14:12.135 sys 0m2.076s 00:14:12.135 10:37:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:12.135 10:37:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.135 ************************************ 00:14:12.135 END TEST raid_rebuild_test_sb_io 00:14:12.135 ************************************ 00:14:12.135 10:37:15 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:14:12.135 10:37:15 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:14:12.135 10:37:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:12.135 
10:37:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:12.135 10:37:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:12.135 ************************************ 00:14:12.135 START TEST raid_rebuild_test 00:14:12.135 ************************************ 00:14:12.135 10:37:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:14:12.135 10:37:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:12.135 10:37:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:12.135 10:37:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:12.135 10:37:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:12.135 10:37:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:12.135 10:37:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:12.135 10:37:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:12.135 10:37:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:12.135 10:37:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:12.135 10:37:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:12.135 10:37:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:12.135 10:37:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:12.135 10:37:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:12.135 10:37:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:12.135 10:37:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:12.135 10:37:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:14:12.135 10:37:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:12.135 10:37:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:12.135 10:37:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:12.135 10:37:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:12.135 10:37:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:12.135 10:37:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:12.135 10:37:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:12.135 10:37:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:12.135 10:37:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:12.135 10:37:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:12.135 10:37:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:12.136 10:37:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:12.136 10:37:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:12.136 10:37:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77701 00:14:12.136 10:37:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:12.136 10:37:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77701 00:14:12.136 10:37:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77701 ']' 00:14:12.136 10:37:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.136 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.136 10:37:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:12.136 10:37:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.136 10:37:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:12.136 10:37:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.136 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:12.136 Zero copy mechanism will not be used. 00:14:12.136 [2024-11-20 10:37:15.423030] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:14:12.136 [2024-11-20 10:37:15.423153] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77701 ] 00:14:12.136 [2024-11-20 10:37:15.598885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.395 [2024-11-20 10:37:15.714422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.690 [2024-11-20 10:37:15.913477] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:12.691 [2024-11-20 10:37:15.913622] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:12.951 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:12.951 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:14:12.951 10:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:12.951 10:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1_malloc 00:14:12.951 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.951 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.951 BaseBdev1_malloc 00:14:12.951 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.951 10:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:12.951 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.951 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.951 [2024-11-20 10:37:16.318621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:12.951 [2024-11-20 10:37:16.318691] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.951 [2024-11-20 10:37:16.318717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:12.951 [2024-11-20 10:37:16.318729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.951 [2024-11-20 10:37:16.321153] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.951 [2024-11-20 10:37:16.321197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:12.951 BaseBdev1 00:14:12.951 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.951 10:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:12.951 10:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:12.951 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.951 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:14:12.951 BaseBdev2_malloc 00:14:12.951 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.951 10:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:12.951 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.951 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.951 [2024-11-20 10:37:16.374799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:12.951 [2024-11-20 10:37:16.374861] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.951 [2024-11-20 10:37:16.374880] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:12.951 [2024-11-20 10:37:16.374892] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.951 [2024-11-20 10:37:16.377093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.951 [2024-11-20 10:37:16.377133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:12.951 BaseBdev2 00:14:12.951 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.951 10:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:12.951 10:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:12.951 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.951 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.211 BaseBdev3_malloc 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.211 [2024-11-20 10:37:16.443110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:13.211 [2024-11-20 10:37:16.443177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.211 [2024-11-20 10:37:16.443204] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:13.211 [2024-11-20 10:37:16.443217] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.211 [2024-11-20 10:37:16.445516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:13.211 [2024-11-20 10:37:16.445556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:13.211 BaseBdev3 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.211 BaseBdev4_malloc 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:13.211 [2024-11-20 10:37:16.498585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:13.211 [2024-11-20 10:37:16.498692] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.211 [2024-11-20 10:37:16.498717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:13.211 [2024-11-20 10:37:16.498729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.211 [2024-11-20 10:37:16.500986] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:13.211 [2024-11-20 10:37:16.501029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:13.211 BaseBdev4 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.211 spare_malloc 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.211 spare_delay 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:13.211 
10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.211 [2024-11-20 10:37:16.565887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:13.211 [2024-11-20 10:37:16.565948] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.211 [2024-11-20 10:37:16.565970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:13.211 [2024-11-20 10:37:16.565980] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.211 [2024-11-20 10:37:16.568064] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:13.211 [2024-11-20 10:37:16.568173] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:13.211 spare 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.211 [2024-11-20 10:37:16.577955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:13.211 [2024-11-20 10:37:16.580030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:13.211 [2024-11-20 10:37:16.580109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:13.211 [2024-11-20 10:37:16.580169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:13.211 [2024-11-20 10:37:16.580276] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007780 00:14:13.211 [2024-11-20 10:37:16.580298] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:13.211 [2024-11-20 10:37:16.580613] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:13.211 [2024-11-20 10:37:16.580825] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:13.211 [2024-11-20 10:37:16.580862] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:13.211 [2024-11-20 10:37:16.581039] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.211 10:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.212 10:37:16 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.212 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.212 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.212 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.212 10:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.212 "name": "raid_bdev1", 00:14:13.212 "uuid": "94a935ad-a089-4276-a03c-19e703d19dc5", 00:14:13.212 "strip_size_kb": 0, 00:14:13.212 "state": "online", 00:14:13.212 "raid_level": "raid1", 00:14:13.212 "superblock": false, 00:14:13.212 "num_base_bdevs": 4, 00:14:13.212 "num_base_bdevs_discovered": 4, 00:14:13.212 "num_base_bdevs_operational": 4, 00:14:13.212 "base_bdevs_list": [ 00:14:13.212 { 00:14:13.212 "name": "BaseBdev1", 00:14:13.212 "uuid": "9bd43556-80fc-51cc-adb3-56914d07bdab", 00:14:13.212 "is_configured": true, 00:14:13.212 "data_offset": 0, 00:14:13.212 "data_size": 65536 00:14:13.212 }, 00:14:13.212 { 00:14:13.212 "name": "BaseBdev2", 00:14:13.212 "uuid": "ecd0e198-a81a-5a99-b8f7-59aff10ad9cb", 00:14:13.212 "is_configured": true, 00:14:13.212 "data_offset": 0, 00:14:13.212 "data_size": 65536 00:14:13.212 }, 00:14:13.212 { 00:14:13.212 "name": "BaseBdev3", 00:14:13.212 "uuid": "c3b25168-bcb2-537f-800f-69c2cd1e0284", 00:14:13.212 "is_configured": true, 00:14:13.212 "data_offset": 0, 00:14:13.212 "data_size": 65536 00:14:13.212 }, 00:14:13.212 { 00:14:13.212 "name": "BaseBdev4", 00:14:13.212 "uuid": "7d39ba10-2741-5896-8905-2289473e3905", 00:14:13.212 "is_configured": true, 00:14:13.212 "data_offset": 0, 00:14:13.212 "data_size": 65536 00:14:13.212 } 00:14:13.212 ] 00:14:13.212 }' 00:14:13.212 10:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.212 10:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
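The test script above filters the `bdev_raid_get_bdevs all` output with `jq -r '.[] | select(.name == "raid_bdev1")'` and then asserts the expected state via `verify_raid_bdev_state raid_bdev1 online raid1 0 4`. A minimal Python sketch of that same check, run against an abbreviated copy of the JSON dumped in the log (field names exactly as printed; the per-bdev UUIDs are elided for brevity):

```python
import json

# Abbreviated copy of the raid_bdev_info JSON dumped in the log above.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "strip_size_kb": 0,
  "state": "online",
  "raid_level": "raid1",
  "superblock": false,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 4,
  "num_base_bdevs_operational": 4,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true, "data_offset": 0, "data_size": 65536},
    {"name": "BaseBdev2", "is_configured": true, "data_offset": 0, "data_size": 65536},
    {"name": "BaseBdev3", "is_configured": true, "data_offset": 0, "data_size": 65536},
    {"name": "BaseBdev4", "is_configured": true, "data_offset": 0, "data_size": 65536}
  ]
}
""")

# Mirror of: verify_raid_bdev_state raid_bdev1 online raid1 0 4
assert raid_bdev_info["state"] == "online"
assert raid_bdev_info["raid_level"] == "raid1"
assert raid_bdev_info["strip_size_kb"] == 0
assert raid_bdev_info["num_base_bdevs_operational"] == 4

# The "discovered" count is the number of configured base bdevs.
discovered = sum(1 for b in raid_bdev_info["base_bdevs_list"] if b["is_configured"])
print(discovered)  # 4
```

Later in this run, after `bdev_raid_remove_base_bdev BaseBdev1`, the same check is repeated with 3 operational bdevs and a null first slot (uuid `00000000-…`, `is_configured: false`), so only the expected counts change.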
set +x 00:14:13.781 10:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:13.781 10:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:13.781 10:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.781 10:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.781 [2024-11-20 10:37:17.017581] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:13.781 10:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.781 10:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:13.781 10:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.781 10:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:13.781 10:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.781 10:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.781 10:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.781 10:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:13.781 10:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:13.781 10:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:13.781 10:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:13.781 10:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:13.781 10:37:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:13.781 10:37:17 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:13.781 10:37:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:13.781 10:37:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:13.781 10:37:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:13.781 10:37:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:13.781 10:37:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:13.781 10:37:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:13.781 10:37:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:14.039 [2024-11-20 10:37:17.288762] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:14.039 /dev/nbd0 00:14:14.039 10:37:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:14.039 10:37:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:14.039 10:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:14.039 10:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:14.039 10:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:14.039 10:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:14.039 10:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:14.039 10:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:14.039 10:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:14.039 10:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:14.039 10:37:17 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:14.039 1+0 records in 00:14:14.039 1+0 records out 00:14:14.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000532989 s, 7.7 MB/s 00:14:14.039 10:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:14.039 10:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:14.039 10:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:14.039 10:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:14.039 10:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:14.039 10:37:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:14.039 10:37:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:14.039 10:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:14.039 10:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:14.039 10:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:14:20.618 65536+0 records in 00:14:20.618 65536+0 records out 00:14:20.618 33554432 bytes (34 MB, 32 MiB) copied, 5.61481 s, 6.0 MB/s 00:14:20.618 10:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:20.618 10:37:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:20.618 10:37:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:20.618 10:37:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:20.618 
10:37:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:20.618 10:37:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:20.618 10:37:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:20.618 [2024-11-20 10:37:23.181050] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:20.618 10:37:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:20.618 10:37:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:20.618 10:37:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:20.618 10:37:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:20.618 10:37:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:20.618 10:37:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:20.618 10:37:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:20.618 10:37:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:20.618 10:37:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:20.618 10:37:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.618 10:37:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.618 [2024-11-20 10:37:23.222188] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:20.618 10:37:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.618 10:37:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:20.618 10:37:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:14:20.618 10:37:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.618 10:37:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:20.618 10:37:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:20.618 10:37:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.618 10:37:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.618 10:37:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.618 10:37:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.618 10:37:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.618 10:37:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.618 10:37:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.618 10:37:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.618 10:37:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.618 10:37:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.618 10:37:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.618 "name": "raid_bdev1", 00:14:20.618 "uuid": "94a935ad-a089-4276-a03c-19e703d19dc5", 00:14:20.618 "strip_size_kb": 0, 00:14:20.618 "state": "online", 00:14:20.618 "raid_level": "raid1", 00:14:20.618 "superblock": false, 00:14:20.618 "num_base_bdevs": 4, 00:14:20.618 "num_base_bdevs_discovered": 3, 00:14:20.618 "num_base_bdevs_operational": 3, 00:14:20.618 "base_bdevs_list": [ 00:14:20.618 { 00:14:20.618 "name": null, 00:14:20.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.618 
"is_configured": false, 00:14:20.618 "data_offset": 0, 00:14:20.618 "data_size": 65536 00:14:20.618 }, 00:14:20.618 { 00:14:20.618 "name": "BaseBdev2", 00:14:20.618 "uuid": "ecd0e198-a81a-5a99-b8f7-59aff10ad9cb", 00:14:20.618 "is_configured": true, 00:14:20.618 "data_offset": 0, 00:14:20.618 "data_size": 65536 00:14:20.618 }, 00:14:20.618 { 00:14:20.618 "name": "BaseBdev3", 00:14:20.618 "uuid": "c3b25168-bcb2-537f-800f-69c2cd1e0284", 00:14:20.618 "is_configured": true, 00:14:20.618 "data_offset": 0, 00:14:20.618 "data_size": 65536 00:14:20.618 }, 00:14:20.618 { 00:14:20.618 "name": "BaseBdev4", 00:14:20.618 "uuid": "7d39ba10-2741-5896-8905-2289473e3905", 00:14:20.618 "is_configured": true, 00:14:20.618 "data_offset": 0, 00:14:20.618 "data_size": 65536 00:14:20.618 } 00:14:20.618 ] 00:14:20.618 }' 00:14:20.618 10:37:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.618 10:37:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.618 10:37:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:20.618 10:37:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.618 10:37:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.618 [2024-11-20 10:37:23.673416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:20.618 [2024-11-20 10:37:23.688961] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:14:20.618 10:37:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.618 10:37:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:20.619 [2024-11-20 10:37:23.690842] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:21.555 10:37:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:21.555 10:37:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.555 10:37:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:21.555 10:37:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:21.555 10:37:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.555 10:37:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.555 10:37:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.555 10:37:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.555 10:37:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.555 10:37:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.555 10:37:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.555 "name": "raid_bdev1", 00:14:21.555 "uuid": "94a935ad-a089-4276-a03c-19e703d19dc5", 00:14:21.555 "strip_size_kb": 0, 00:14:21.555 "state": "online", 00:14:21.555 "raid_level": "raid1", 00:14:21.555 "superblock": false, 00:14:21.555 "num_base_bdevs": 4, 00:14:21.555 "num_base_bdevs_discovered": 4, 00:14:21.555 "num_base_bdevs_operational": 4, 00:14:21.555 "process": { 00:14:21.555 "type": "rebuild", 00:14:21.555 "target": "spare", 00:14:21.555 "progress": { 00:14:21.555 "blocks": 20480, 00:14:21.555 "percent": 31 00:14:21.555 } 00:14:21.555 }, 00:14:21.555 "base_bdevs_list": [ 00:14:21.555 { 00:14:21.555 "name": "spare", 00:14:21.555 "uuid": "a9ff8e99-291d-566d-a297-8f126d8040d2", 00:14:21.555 "is_configured": true, 00:14:21.555 "data_offset": 0, 00:14:21.555 "data_size": 65536 00:14:21.555 }, 00:14:21.555 { 00:14:21.555 "name": "BaseBdev2", 00:14:21.555 "uuid": 
"ecd0e198-a81a-5a99-b8f7-59aff10ad9cb", 00:14:21.555 "is_configured": true, 00:14:21.555 "data_offset": 0, 00:14:21.555 "data_size": 65536 00:14:21.555 }, 00:14:21.555 { 00:14:21.555 "name": "BaseBdev3", 00:14:21.555 "uuid": "c3b25168-bcb2-537f-800f-69c2cd1e0284", 00:14:21.555 "is_configured": true, 00:14:21.555 "data_offset": 0, 00:14:21.555 "data_size": 65536 00:14:21.555 }, 00:14:21.555 { 00:14:21.555 "name": "BaseBdev4", 00:14:21.555 "uuid": "7d39ba10-2741-5896-8905-2289473e3905", 00:14:21.555 "is_configured": true, 00:14:21.555 "data_offset": 0, 00:14:21.555 "data_size": 65536 00:14:21.555 } 00:14:21.555 ] 00:14:21.555 }' 00:14:21.555 10:37:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.555 10:37:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:21.555 10:37:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.555 10:37:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:21.555 10:37:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:21.555 10:37:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.555 10:37:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.555 [2024-11-20 10:37:24.838136] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:21.555 [2024-11-20 10:37:24.896462] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:21.555 [2024-11-20 10:37:24.896616] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.555 [2024-11-20 10:37:24.896635] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:21.555 [2024-11-20 10:37:24.896645] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove 
target bdev: No such device 00:14:21.555 10:37:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.555 10:37:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:21.555 10:37:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:21.555 10:37:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:21.555 10:37:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:21.555 10:37:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:21.555 10:37:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:21.555 10:37:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.555 10:37:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.555 10:37:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.555 10:37:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.555 10:37:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.555 10:37:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.555 10:37:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.555 10:37:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.555 10:37:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.555 10:37:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.555 "name": "raid_bdev1", 00:14:21.555 "uuid": "94a935ad-a089-4276-a03c-19e703d19dc5", 00:14:21.555 "strip_size_kb": 0, 00:14:21.555 "state": "online", 
00:14:21.555 "raid_level": "raid1", 00:14:21.555 "superblock": false, 00:14:21.555 "num_base_bdevs": 4, 00:14:21.555 "num_base_bdevs_discovered": 3, 00:14:21.555 "num_base_bdevs_operational": 3, 00:14:21.555 "base_bdevs_list": [ 00:14:21.555 { 00:14:21.555 "name": null, 00:14:21.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.555 "is_configured": false, 00:14:21.555 "data_offset": 0, 00:14:21.555 "data_size": 65536 00:14:21.555 }, 00:14:21.555 { 00:14:21.555 "name": "BaseBdev2", 00:14:21.555 "uuid": "ecd0e198-a81a-5a99-b8f7-59aff10ad9cb", 00:14:21.555 "is_configured": true, 00:14:21.555 "data_offset": 0, 00:14:21.555 "data_size": 65536 00:14:21.555 }, 00:14:21.555 { 00:14:21.555 "name": "BaseBdev3", 00:14:21.555 "uuid": "c3b25168-bcb2-537f-800f-69c2cd1e0284", 00:14:21.555 "is_configured": true, 00:14:21.555 "data_offset": 0, 00:14:21.555 "data_size": 65536 00:14:21.555 }, 00:14:21.555 { 00:14:21.555 "name": "BaseBdev4", 00:14:21.555 "uuid": "7d39ba10-2741-5896-8905-2289473e3905", 00:14:21.555 "is_configured": true, 00:14:21.555 "data_offset": 0, 00:14:21.555 "data_size": 65536 00:14:21.555 } 00:14:21.555 ] 00:14:21.555 }' 00:14:21.555 10:37:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.555 10:37:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.123 10:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:22.123 10:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.123 10:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:22.123 10:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:22.123 10:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.123 10:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:22.123 10:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.123 10:37:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.123 10:37:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.123 10:37:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.123 10:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.123 "name": "raid_bdev1", 00:14:22.123 "uuid": "94a935ad-a089-4276-a03c-19e703d19dc5", 00:14:22.123 "strip_size_kb": 0, 00:14:22.123 "state": "online", 00:14:22.123 "raid_level": "raid1", 00:14:22.123 "superblock": false, 00:14:22.123 "num_base_bdevs": 4, 00:14:22.123 "num_base_bdevs_discovered": 3, 00:14:22.123 "num_base_bdevs_operational": 3, 00:14:22.123 "base_bdevs_list": [ 00:14:22.123 { 00:14:22.123 "name": null, 00:14:22.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.123 "is_configured": false, 00:14:22.123 "data_offset": 0, 00:14:22.123 "data_size": 65536 00:14:22.123 }, 00:14:22.123 { 00:14:22.123 "name": "BaseBdev2", 00:14:22.123 "uuid": "ecd0e198-a81a-5a99-b8f7-59aff10ad9cb", 00:14:22.123 "is_configured": true, 00:14:22.123 "data_offset": 0, 00:14:22.123 "data_size": 65536 00:14:22.123 }, 00:14:22.123 { 00:14:22.123 "name": "BaseBdev3", 00:14:22.123 "uuid": "c3b25168-bcb2-537f-800f-69c2cd1e0284", 00:14:22.123 "is_configured": true, 00:14:22.123 "data_offset": 0, 00:14:22.123 "data_size": 65536 00:14:22.123 }, 00:14:22.123 { 00:14:22.123 "name": "BaseBdev4", 00:14:22.123 "uuid": "7d39ba10-2741-5896-8905-2289473e3905", 00:14:22.123 "is_configured": true, 00:14:22.123 "data_offset": 0, 00:14:22.123 "data_size": 65536 00:14:22.123 } 00:14:22.123 ] 00:14:22.123 }' 00:14:22.123 10:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.123 10:37:25 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:22.123 10:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.123 10:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:22.123 10:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:22.123 10:37:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.123 10:37:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.123 [2024-11-20 10:37:25.465965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:22.123 [2024-11-20 10:37:25.481052] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:14:22.123 10:37:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.123 10:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:22.123 [2024-11-20 10:37:25.483090] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:23.059 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:23.059 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.059 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:23.059 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:23.059 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.059 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.059 10:37:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.059 10:37:26 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.059 10:37:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.059 10:37:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.317 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.317 "name": "raid_bdev1", 00:14:23.317 "uuid": "94a935ad-a089-4276-a03c-19e703d19dc5", 00:14:23.317 "strip_size_kb": 0, 00:14:23.317 "state": "online", 00:14:23.317 "raid_level": "raid1", 00:14:23.317 "superblock": false, 00:14:23.317 "num_base_bdevs": 4, 00:14:23.317 "num_base_bdevs_discovered": 4, 00:14:23.317 "num_base_bdevs_operational": 4, 00:14:23.317 "process": { 00:14:23.317 "type": "rebuild", 00:14:23.317 "target": "spare", 00:14:23.317 "progress": { 00:14:23.317 "blocks": 20480, 00:14:23.317 "percent": 31 00:14:23.317 } 00:14:23.317 }, 00:14:23.317 "base_bdevs_list": [ 00:14:23.317 { 00:14:23.317 "name": "spare", 00:14:23.317 "uuid": "a9ff8e99-291d-566d-a297-8f126d8040d2", 00:14:23.317 "is_configured": true, 00:14:23.317 "data_offset": 0, 00:14:23.317 "data_size": 65536 00:14:23.317 }, 00:14:23.317 { 00:14:23.317 "name": "BaseBdev2", 00:14:23.317 "uuid": "ecd0e198-a81a-5a99-b8f7-59aff10ad9cb", 00:14:23.317 "is_configured": true, 00:14:23.317 "data_offset": 0, 00:14:23.317 "data_size": 65536 00:14:23.317 }, 00:14:23.317 { 00:14:23.317 "name": "BaseBdev3", 00:14:23.317 "uuid": "c3b25168-bcb2-537f-800f-69c2cd1e0284", 00:14:23.317 "is_configured": true, 00:14:23.317 "data_offset": 0, 00:14:23.317 "data_size": 65536 00:14:23.317 }, 00:14:23.317 { 00:14:23.317 "name": "BaseBdev4", 00:14:23.317 "uuid": "7d39ba10-2741-5896-8905-2289473e3905", 00:14:23.317 "is_configured": true, 00:14:23.317 "data_offset": 0, 00:14:23.317 "data_size": 65536 00:14:23.317 } 00:14:23.317 ] 00:14:23.317 }' 00:14:23.317 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:14:23.317 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:23.317 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.317 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:23.317 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:23.317 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:23.317 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:23.317 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:23.317 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:23.317 10:37:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.317 10:37:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.317 [2024-11-20 10:37:26.638425] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:23.317 [2024-11-20 10:37:26.688674] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:14:23.317 10:37:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.317 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:23.317 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:23.317 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:23.317 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.317 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:23.317 10:37:26 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:23.317 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.317 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.317 10:37:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.317 10:37:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.317 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.317 10:37:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.317 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.317 "name": "raid_bdev1", 00:14:23.317 "uuid": "94a935ad-a089-4276-a03c-19e703d19dc5", 00:14:23.317 "strip_size_kb": 0, 00:14:23.317 "state": "online", 00:14:23.317 "raid_level": "raid1", 00:14:23.317 "superblock": false, 00:14:23.317 "num_base_bdevs": 4, 00:14:23.317 "num_base_bdevs_discovered": 3, 00:14:23.317 "num_base_bdevs_operational": 3, 00:14:23.317 "process": { 00:14:23.317 "type": "rebuild", 00:14:23.317 "target": "spare", 00:14:23.317 "progress": { 00:14:23.317 "blocks": 24576, 00:14:23.317 "percent": 37 00:14:23.317 } 00:14:23.317 }, 00:14:23.317 "base_bdevs_list": [ 00:14:23.317 { 00:14:23.317 "name": "spare", 00:14:23.317 "uuid": "a9ff8e99-291d-566d-a297-8f126d8040d2", 00:14:23.317 "is_configured": true, 00:14:23.317 "data_offset": 0, 00:14:23.317 "data_size": 65536 00:14:23.317 }, 00:14:23.317 { 00:14:23.317 "name": null, 00:14:23.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.317 "is_configured": false, 00:14:23.317 "data_offset": 0, 00:14:23.317 "data_size": 65536 00:14:23.317 }, 00:14:23.317 { 00:14:23.317 "name": "BaseBdev3", 00:14:23.317 "uuid": "c3b25168-bcb2-537f-800f-69c2cd1e0284", 00:14:23.317 "is_configured": true, 
00:14:23.317 "data_offset": 0, 00:14:23.317 "data_size": 65536 00:14:23.317 }, 00:14:23.317 { 00:14:23.317 "name": "BaseBdev4", 00:14:23.317 "uuid": "7d39ba10-2741-5896-8905-2289473e3905", 00:14:23.317 "is_configured": true, 00:14:23.317 "data_offset": 0, 00:14:23.317 "data_size": 65536 00:14:23.317 } 00:14:23.317 ] 00:14:23.317 }' 00:14:23.317 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.317 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:23.317 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.576 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:23.576 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=451 00:14:23.576 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:23.576 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:23.576 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.576 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:23.576 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:23.576 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.576 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.576 10:37:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.576 10:37:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.576 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.576 10:37:26 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.576 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.576 "name": "raid_bdev1", 00:14:23.576 "uuid": "94a935ad-a089-4276-a03c-19e703d19dc5", 00:14:23.576 "strip_size_kb": 0, 00:14:23.576 "state": "online", 00:14:23.576 "raid_level": "raid1", 00:14:23.576 "superblock": false, 00:14:23.576 "num_base_bdevs": 4, 00:14:23.576 "num_base_bdevs_discovered": 3, 00:14:23.576 "num_base_bdevs_operational": 3, 00:14:23.576 "process": { 00:14:23.576 "type": "rebuild", 00:14:23.576 "target": "spare", 00:14:23.576 "progress": { 00:14:23.576 "blocks": 26624, 00:14:23.576 "percent": 40 00:14:23.576 } 00:14:23.576 }, 00:14:23.576 "base_bdevs_list": [ 00:14:23.576 { 00:14:23.576 "name": "spare", 00:14:23.576 "uuid": "a9ff8e99-291d-566d-a297-8f126d8040d2", 00:14:23.577 "is_configured": true, 00:14:23.577 "data_offset": 0, 00:14:23.577 "data_size": 65536 00:14:23.577 }, 00:14:23.577 { 00:14:23.577 "name": null, 00:14:23.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.577 "is_configured": false, 00:14:23.577 "data_offset": 0, 00:14:23.577 "data_size": 65536 00:14:23.577 }, 00:14:23.577 { 00:14:23.577 "name": "BaseBdev3", 00:14:23.577 "uuid": "c3b25168-bcb2-537f-800f-69c2cd1e0284", 00:14:23.577 "is_configured": true, 00:14:23.577 "data_offset": 0, 00:14:23.577 "data_size": 65536 00:14:23.577 }, 00:14:23.577 { 00:14:23.577 "name": "BaseBdev4", 00:14:23.577 "uuid": "7d39ba10-2741-5896-8905-2289473e3905", 00:14:23.577 "is_configured": true, 00:14:23.577 "data_offset": 0, 00:14:23.577 "data_size": 65536 00:14:23.577 } 00:14:23.577 ] 00:14:23.577 }' 00:14:23.577 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.577 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:23.577 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:14:23.577 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:23.577 10:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:24.539 10:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:24.539 10:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:24.539 10:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.539 10:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:24.539 10:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:24.539 10:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.539 10:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.539 10:37:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.539 10:37:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.539 10:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.539 10:37:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.539 10:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.539 "name": "raid_bdev1", 00:14:24.539 "uuid": "94a935ad-a089-4276-a03c-19e703d19dc5", 00:14:24.539 "strip_size_kb": 0, 00:14:24.539 "state": "online", 00:14:24.539 "raid_level": "raid1", 00:14:24.539 "superblock": false, 00:14:24.539 "num_base_bdevs": 4, 00:14:24.539 "num_base_bdevs_discovered": 3, 00:14:24.539 "num_base_bdevs_operational": 3, 00:14:24.539 "process": { 00:14:24.539 "type": "rebuild", 00:14:24.539 "target": "spare", 00:14:24.539 "progress": { 00:14:24.539 
"blocks": 49152, 00:14:24.539 "percent": 75 00:14:24.539 } 00:14:24.539 }, 00:14:24.539 "base_bdevs_list": [ 00:14:24.539 { 00:14:24.539 "name": "spare", 00:14:24.539 "uuid": "a9ff8e99-291d-566d-a297-8f126d8040d2", 00:14:24.539 "is_configured": true, 00:14:24.539 "data_offset": 0, 00:14:24.539 "data_size": 65536 00:14:24.539 }, 00:14:24.539 { 00:14:24.539 "name": null, 00:14:24.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.539 "is_configured": false, 00:14:24.539 "data_offset": 0, 00:14:24.539 "data_size": 65536 00:14:24.539 }, 00:14:24.539 { 00:14:24.539 "name": "BaseBdev3", 00:14:24.539 "uuid": "c3b25168-bcb2-537f-800f-69c2cd1e0284", 00:14:24.539 "is_configured": true, 00:14:24.539 "data_offset": 0, 00:14:24.539 "data_size": 65536 00:14:24.539 }, 00:14:24.539 { 00:14:24.539 "name": "BaseBdev4", 00:14:24.539 "uuid": "7d39ba10-2741-5896-8905-2289473e3905", 00:14:24.539 "is_configured": true, 00:14:24.539 "data_offset": 0, 00:14:24.539 "data_size": 65536 00:14:24.539 } 00:14:24.539 ] 00:14:24.539 }' 00:14:24.539 10:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.797 10:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:24.797 10:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.797 10:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:24.797 10:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:25.365 [2024-11-20 10:37:28.698061] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:25.365 [2024-11-20 10:37:28.698234] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:25.365 [2024-11-20 10:37:28.698294] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:25.930 10:37:29 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:25.930 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:25.930 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.930 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:25.930 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:25.930 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.930 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.930 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.930 10:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.930 10:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.930 10:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.930 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.930 "name": "raid_bdev1", 00:14:25.930 "uuid": "94a935ad-a089-4276-a03c-19e703d19dc5", 00:14:25.930 "strip_size_kb": 0, 00:14:25.930 "state": "online", 00:14:25.930 "raid_level": "raid1", 00:14:25.930 "superblock": false, 00:14:25.930 "num_base_bdevs": 4, 00:14:25.930 "num_base_bdevs_discovered": 3, 00:14:25.930 "num_base_bdevs_operational": 3, 00:14:25.930 "base_bdevs_list": [ 00:14:25.930 { 00:14:25.930 "name": "spare", 00:14:25.930 "uuid": "a9ff8e99-291d-566d-a297-8f126d8040d2", 00:14:25.930 "is_configured": true, 00:14:25.930 "data_offset": 0, 00:14:25.930 "data_size": 65536 00:14:25.930 }, 00:14:25.930 { 00:14:25.930 "name": null, 00:14:25.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.930 "is_configured": false, 00:14:25.930 
"data_offset": 0, 00:14:25.930 "data_size": 65536 00:14:25.930 }, 00:14:25.930 { 00:14:25.930 "name": "BaseBdev3", 00:14:25.930 "uuid": "c3b25168-bcb2-537f-800f-69c2cd1e0284", 00:14:25.930 "is_configured": true, 00:14:25.930 "data_offset": 0, 00:14:25.930 "data_size": 65536 00:14:25.930 }, 00:14:25.930 { 00:14:25.930 "name": "BaseBdev4", 00:14:25.930 "uuid": "7d39ba10-2741-5896-8905-2289473e3905", 00:14:25.930 "is_configured": true, 00:14:25.930 "data_offset": 0, 00:14:25.930 "data_size": 65536 00:14:25.930 } 00:14:25.930 ] 00:14:25.930 }' 00:14:25.930 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.930 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:25.930 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.930 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:25.930 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:25.930 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:25.930 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.930 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:25.930 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:25.930 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.930 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.930 10:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.930 10:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.930 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.930 10:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.930 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.930 "name": "raid_bdev1", 00:14:25.930 "uuid": "94a935ad-a089-4276-a03c-19e703d19dc5", 00:14:25.930 "strip_size_kb": 0, 00:14:25.930 "state": "online", 00:14:25.930 "raid_level": "raid1", 00:14:25.930 "superblock": false, 00:14:25.930 "num_base_bdevs": 4, 00:14:25.930 "num_base_bdevs_discovered": 3, 00:14:25.930 "num_base_bdevs_operational": 3, 00:14:25.930 "base_bdevs_list": [ 00:14:25.930 { 00:14:25.930 "name": "spare", 00:14:25.930 "uuid": "a9ff8e99-291d-566d-a297-8f126d8040d2", 00:14:25.930 "is_configured": true, 00:14:25.930 "data_offset": 0, 00:14:25.930 "data_size": 65536 00:14:25.930 }, 00:14:25.930 { 00:14:25.930 "name": null, 00:14:25.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.930 "is_configured": false, 00:14:25.930 "data_offset": 0, 00:14:25.930 "data_size": 65536 00:14:25.930 }, 00:14:25.930 { 00:14:25.930 "name": "BaseBdev3", 00:14:25.930 "uuid": "c3b25168-bcb2-537f-800f-69c2cd1e0284", 00:14:25.930 "is_configured": true, 00:14:25.930 "data_offset": 0, 00:14:25.930 "data_size": 65536 00:14:25.930 }, 00:14:25.930 { 00:14:25.930 "name": "BaseBdev4", 00:14:25.930 "uuid": "7d39ba10-2741-5896-8905-2289473e3905", 00:14:25.930 "is_configured": true, 00:14:25.930 "data_offset": 0, 00:14:25.930 "data_size": 65536 00:14:25.930 } 00:14:25.930 ] 00:14:25.930 }' 00:14:25.930 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.930 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:25.930 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.930 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:25.930 
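The loop traced at `bdev_raid.sh@706-711` polls rebuild progress once per second until the process fields report `none` or a timeout elapses, at which point the `break` above fires. A hedged sketch of that bounded-poll pattern, where `rebuild_done` is a hypothetical stand-in for the RPC-plus-jq check:

```shell
timeout=10                              # the test itself uses 451 s
rebuild_done() { (( SECONDS >= 2 )); }  # hypothetical: "done" after ~2 s

# Bash's SECONDS builtin counts wall-clock seconds since shell start,
# which is what makes the (( SECONDS < timeout )) guard in the trace work.
while (( SECONDS < timeout )); do
    rebuild_done && break
    sleep 1
done
finished=$(( SECONDS < timeout ? 1 : 0 ))
echo "finished=$finished"
```

Checking `SECONDS < timeout` after the loop distinguishes a clean completion from a timeout, mirroring how the test only proceeds to the `none`/`none` verification once the rebuild finished in time.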
10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:25.930 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.188 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.188 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:26.188 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:26.188 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:26.188 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.188 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.188 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.188 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.188 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.188 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.188 10:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.188 10:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.188 10:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.188 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.188 "name": "raid_bdev1", 00:14:26.188 "uuid": "94a935ad-a089-4276-a03c-19e703d19dc5", 00:14:26.188 "strip_size_kb": 0, 00:14:26.188 "state": "online", 00:14:26.188 "raid_level": "raid1", 00:14:26.188 "superblock": false, 00:14:26.188 "num_base_bdevs": 4, 00:14:26.188 "num_base_bdevs_discovered": 
3, 00:14:26.188 "num_base_bdevs_operational": 3, 00:14:26.188 "base_bdevs_list": [ 00:14:26.188 { 00:14:26.188 "name": "spare", 00:14:26.189 "uuid": "a9ff8e99-291d-566d-a297-8f126d8040d2", 00:14:26.189 "is_configured": true, 00:14:26.189 "data_offset": 0, 00:14:26.189 "data_size": 65536 00:14:26.189 }, 00:14:26.189 { 00:14:26.189 "name": null, 00:14:26.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.189 "is_configured": false, 00:14:26.189 "data_offset": 0, 00:14:26.189 "data_size": 65536 00:14:26.189 }, 00:14:26.189 { 00:14:26.189 "name": "BaseBdev3", 00:14:26.189 "uuid": "c3b25168-bcb2-537f-800f-69c2cd1e0284", 00:14:26.189 "is_configured": true, 00:14:26.189 "data_offset": 0, 00:14:26.189 "data_size": 65536 00:14:26.189 }, 00:14:26.189 { 00:14:26.189 "name": "BaseBdev4", 00:14:26.189 "uuid": "7d39ba10-2741-5896-8905-2289473e3905", 00:14:26.189 "is_configured": true, 00:14:26.189 "data_offset": 0, 00:14:26.189 "data_size": 65536 00:14:26.189 } 00:14:26.189 ] 00:14:26.189 }' 00:14:26.189 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.189 10:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.448 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:26.448 10:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.448 10:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.448 [2024-11-20 10:37:29.868447] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:26.448 [2024-11-20 10:37:29.868534] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:26.448 [2024-11-20 10:37:29.868631] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:26.448 [2024-11-20 10:37:29.868723] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:14:26.448 [2024-11-20 10:37:29.868734] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:26.448 10:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.448 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:26.448 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.448 10:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.448 10:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.448 10:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.718 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:26.718 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:26.718 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:26.718 10:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:26.718 10:37:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:26.718 10:37:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:26.718 10:37:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:26.718 10:37:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:26.718 10:37:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:26.718 10:37:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:26.718 10:37:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:26.718 10:37:29 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:26.718 10:37:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:26.718 /dev/nbd0 00:14:26.718 10:37:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:26.719 10:37:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:26.719 10:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:26.719 10:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:26.719 10:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:26.719 10:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:26.719 10:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:26.719 10:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:26.719 10:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:26.719 10:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:26.719 10:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:26.719 1+0 records in 00:14:26.719 1+0 records out 00:14:26.719 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354994 s, 11.5 MB/s 00:14:26.719 10:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.719 10:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:26.719 10:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.719 10:37:30 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:26.719 10:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:26.719 10:37:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:26.719 10:37:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:26.719 10:37:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:26.993 /dev/nbd1 00:14:26.993 10:37:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:26.993 10:37:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:26.993 10:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:26.993 10:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:26.993 10:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:26.993 10:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:26.993 10:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:26.993 10:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:26.993 10:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:26.993 10:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:26.993 10:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:26.993 1+0 records in 00:14:26.993 1+0 records out 00:14:26.993 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251279 s, 16.3 MB/s 00:14:26.993 10:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.993 10:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:26.993 10:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.993 10:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:26.993 10:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:26.993 10:37:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:26.993 10:37:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:26.993 10:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:27.252 10:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:27.252 10:37:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:27.252 10:37:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:27.252 10:37:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:27.252 10:37:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:27.252 10:37:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:27.252 10:37:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:27.510 10:37:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:27.510 10:37:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:27.510 10:37:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:27.510 10:37:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:14:27.510 10:37:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:27.510 10:37:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:27.510 10:37:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:27.510 10:37:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:27.510 10:37:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:27.510 10:37:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:27.768 10:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:27.768 10:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:27.768 10:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:27.768 10:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:27.768 10:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:27.768 10:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:27.768 10:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:27.768 10:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:27.768 10:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:27.768 10:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77701 00:14:27.768 10:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77701 ']' 00:14:27.768 10:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77701 00:14:27.768 10:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:14:27.768 10:37:31 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:27.768 10:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77701 00:14:27.768 10:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:27.768 10:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:27.768 killing process with pid 77701 00:14:27.768 10:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77701' 00:14:27.768 10:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77701 00:14:27.768 Received shutdown signal, test time was about 60.000000 seconds 00:14:27.768 00:14:27.768 Latency(us) 00:14:27.768 [2024-11-20T10:37:31.247Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:27.768 [2024-11-20T10:37:31.247Z] =================================================================================================================== 00:14:27.768 [2024-11-20T10:37:31.247Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:27.768 [2024-11-20 10:37:31.135102] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:27.768 10:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77701 00:14:28.335 [2024-11-20 10:37:31.616856] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:29.272 10:37:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:29.272 00:14:29.272 real 0m17.374s 00:14:29.272 user 0m19.500s 00:14:29.272 sys 0m3.015s 00:14:29.272 10:37:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:29.272 10:37:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.272 ************************************ 00:14:29.272 END TEST raid_rebuild_test 00:14:29.272 ************************************ 00:14:29.531 
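After the rebuild completes, the test above exports `BaseBdev1` and the spare over NBD, probes each device with a single direct 4 KiB `dd` read, and then runs `cmp -i 0 /dev/nbd0 /dev/nbd1` to confirm the rebuilt data matches the original. A self-contained sketch of that verification step, with regular temp files standing in for the NBD devices (no SPDK target is assumed):

```shell
a=$(mktemp); b=$(mktemp)
head -c 4096 /dev/urandom >"$a"   # stands in for BaseBdev1's contents
cp "$a" "$b"                      # stands in for the rebuilt spare

# Readability probe, as in waitfornbd: one 4 KiB read of the device.
dd if="$a" of=/dev/null bs=4096 count=1 2>/dev/null

# Byte-for-byte comparison from offset 0, as in bdev_raid.sh@738.
if cmp -s "$a" "$b"; then match=1; echo "devices match"; else match=0; fi
rm -f "$a" "$b"
```

`cmp` exiting non-zero here is what would fail the test: any byte that differs between the surviving base bdev and the rebuilt spare means the rebuild copied data incorrectly.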
10:37:32 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:14:29.531 10:37:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:29.531 10:37:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:29.531 10:37:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:29.531 ************************************ 00:14:29.531 START TEST raid_rebuild_test_sb 00:14:29.531 ************************************ 00:14:29.531 10:37:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:14:29.531 10:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:29.531 10:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:29.531 10:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:29.531 10:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:29.531 10:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:29.531 10:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:29.531 10:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:29.531 10:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:29.531 10:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:29.531 10:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:29.531 10:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:29.531 10:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:29.531 10:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:29.531 
10:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:29.531 10:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:29.531 10:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:29.531 10:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:29.531 10:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:29.531 10:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:29.531 10:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:29.531 10:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:29.532 10:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:29.532 10:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:29.532 10:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:29.532 10:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:29.532 10:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:29.532 10:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:29.532 10:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:29.532 10:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:29.532 10:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:29.532 10:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78143 00:14:29.532 10:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T 
raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:29.532 10:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78143 00:14:29.532 10:37:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78143 ']' 00:14:29.532 10:37:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.532 10:37:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:29.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.532 10:37:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.532 10:37:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:29.532 10:37:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.532 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:29.532 Zero copy mechanism will not be used. 00:14:29.532 [2024-11-20 10:37:32.875219] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:14:29.532 [2024-11-20 10:37:32.875361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78143 ] 00:14:29.792 [2024-11-20 10:37:33.039809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.792 [2024-11-20 10:37:33.153807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.051 [2024-11-20 10:37:33.357507] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:30.051 [2024-11-20 10:37:33.357570] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:30.311 10:37:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:30.311 10:37:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:30.311 10:37:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:30.311 10:37:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:30.311 10:37:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.311 10:37:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.311 BaseBdev1_malloc 00:14:30.311 10:37:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.311 10:37:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:30.311 10:37:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.311 10:37:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.311 [2024-11-20 10:37:33.751411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:14:30.311 [2024-11-20 10:37:33.751505] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.311 [2024-11-20 10:37:33.751528] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:30.311 [2024-11-20 10:37:33.751540] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.311 [2024-11-20 10:37:33.753593] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.311 [2024-11-20 10:37:33.753629] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:30.311 BaseBdev1 00:14:30.311 10:37:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.311 10:37:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:30.311 10:37:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:30.311 10:37:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.311 10:37:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.572 BaseBdev2_malloc 00:14:30.572 10:37:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.572 10:37:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:30.572 10:37:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.572 10:37:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.572 [2024-11-20 10:37:33.807959] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:30.572 [2024-11-20 10:37:33.808022] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.572 [2024-11-20 10:37:33.808041] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:30.572 [2024-11-20 10:37:33.808054] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.572 [2024-11-20 10:37:33.810235] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.572 [2024-11-20 10:37:33.810272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:30.572 BaseBdev2 00:14:30.572 10:37:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.572 10:37:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:30.572 10:37:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:30.572 10:37:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.572 10:37:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.572 BaseBdev3_malloc 00:14:30.572 10:37:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.572 10:37:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:30.572 10:37:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.572 10:37:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.572 [2024-11-20 10:37:33.873354] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:30.572 [2024-11-20 10:37:33.873422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.572 [2024-11-20 10:37:33.873444] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:30.572 [2024-11-20 10:37:33.873456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:14:30.572 [2024-11-20 10:37:33.875657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.573 [2024-11-20 10:37:33.875699] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:30.573 BaseBdev3 00:14:30.573 10:37:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.573 10:37:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:30.573 10:37:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:30.573 10:37:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.573 10:37:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.573 BaseBdev4_malloc 00:14:30.573 10:37:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.573 10:37:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:30.573 10:37:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.573 10:37:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.573 [2024-11-20 10:37:33.931615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:30.573 [2024-11-20 10:37:33.931698] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.573 [2024-11-20 10:37:33.931720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:30.573 [2024-11-20 10:37:33.931733] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.573 [2024-11-20 10:37:33.934062] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.573 [2024-11-20 10:37:33.934105] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:30.573 BaseBdev4 00:14:30.573 10:37:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.573 10:37:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:30.573 10:37:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.573 10:37:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.573 spare_malloc 00:14:30.573 10:37:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.573 10:37:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:30.573 10:37:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.573 10:37:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.573 spare_delay 00:14:30.573 10:37:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.573 10:37:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:30.573 10:37:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.573 10:37:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.573 [2024-11-20 10:37:34.000809] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:30.573 [2024-11-20 10:37:34.000874] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.573 [2024-11-20 10:37:34.000894] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:30.573 [2024-11-20 10:37:34.000905] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:14:30.573 [2024-11-20 10:37:34.003083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.573 [2024-11-20 10:37:34.003119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:30.573 spare 00:14:30.573 10:37:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.573 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:30.573 10:37:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.573 10:37:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.573 [2024-11-20 10:37:34.012860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:30.573 [2024-11-20 10:37:34.014740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:30.573 [2024-11-20 10:37:34.014814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:30.573 [2024-11-20 10:37:34.014867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:30.573 [2024-11-20 10:37:34.015044] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:30.573 [2024-11-20 10:37:34.015066] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:30.573 [2024-11-20 10:37:34.015319] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:30.573 [2024-11-20 10:37:34.015555] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:30.573 [2024-11-20 10:37:34.015577] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:30.573 [2024-11-20 10:37:34.015746] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.573 10:37:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.573 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:30.573 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.573 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.573 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:30.573 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:30.573 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:30.573 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.573 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.573 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.573 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.573 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.573 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.573 10:37:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.573 10:37:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.573 10:37:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.832 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.832 "name": "raid_bdev1", 00:14:30.832 "uuid": 
"cc2145e8-7a24-4eb3-a10f-1ccf8c798fd5", 00:14:30.832 "strip_size_kb": 0, 00:14:30.832 "state": "online", 00:14:30.832 "raid_level": "raid1", 00:14:30.832 "superblock": true, 00:14:30.832 "num_base_bdevs": 4, 00:14:30.832 "num_base_bdevs_discovered": 4, 00:14:30.832 "num_base_bdevs_operational": 4, 00:14:30.832 "base_bdevs_list": [ 00:14:30.832 { 00:14:30.832 "name": "BaseBdev1", 00:14:30.832 "uuid": "92ff77d7-5306-572e-b5d5-1e172b3b4f0b", 00:14:30.832 "is_configured": true, 00:14:30.832 "data_offset": 2048, 00:14:30.832 "data_size": 63488 00:14:30.832 }, 00:14:30.832 { 00:14:30.832 "name": "BaseBdev2", 00:14:30.832 "uuid": "f12b0f97-2e49-583a-a9a2-57d97bb572e4", 00:14:30.832 "is_configured": true, 00:14:30.832 "data_offset": 2048, 00:14:30.832 "data_size": 63488 00:14:30.832 }, 00:14:30.832 { 00:14:30.832 "name": "BaseBdev3", 00:14:30.832 "uuid": "3a6ada83-8a3d-560c-9302-31b33719da58", 00:14:30.832 "is_configured": true, 00:14:30.832 "data_offset": 2048, 00:14:30.832 "data_size": 63488 00:14:30.832 }, 00:14:30.832 { 00:14:30.832 "name": "BaseBdev4", 00:14:30.832 "uuid": "5f30c780-afdf-5a72-9348-67cce08e966a", 00:14:30.832 "is_configured": true, 00:14:30.832 "data_offset": 2048, 00:14:30.832 "data_size": 63488 00:14:30.832 } 00:14:30.832 ] 00:14:30.832 }' 00:14:30.832 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.832 10:37:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.089 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:31.089 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:31.089 10:37:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.089 10:37:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.089 [2024-11-20 10:37:34.420614] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:14:31.089 10:37:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.089 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:31.090 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.090 10:37:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.090 10:37:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.090 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:31.090 10:37:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.090 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:31.090 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:31.090 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:31.090 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:31.090 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:31.090 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:31.090 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:31.090 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:31.090 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:31.090 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:31.090 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:31.090 10:37:34 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:31.090 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:31.090 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:31.347 [2024-11-20 10:37:34.731697] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:31.347 /dev/nbd0 00:14:31.347 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:31.347 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:31.347 10:37:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:31.347 10:37:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:31.347 10:37:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:31.347 10:37:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:31.347 10:37:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:31.347 10:37:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:31.347 10:37:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:31.347 10:37:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:31.347 10:37:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:31.347 1+0 records in 00:14:31.347 1+0 records out 00:14:31.347 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415109 s, 9.9 MB/s 00:14:31.347 10:37:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.347 10:37:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:31.347 10:37:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.347 10:37:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:31.347 10:37:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:31.347 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:31.347 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:31.347 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:31.347 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:31.347 10:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:36.622 63488+0 records in 00:14:36.622 63488+0 records out 00:14:36.622 32505856 bytes (33 MB, 31 MiB) copied, 5.24598 s, 6.2 MB/s 00:14:36.622 10:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:36.622 10:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:36.622 10:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:36.622 10:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:36.622 10:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:36.622 10:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:36.622 10:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk 
/dev/nbd0 00:14:36.880 10:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:36.880 [2024-11-20 10:37:40.262799] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.880 10:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:36.880 10:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:36.880 10:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:36.880 10:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:36.880 10:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:36.880 10:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:36.880 10:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:36.880 10:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:36.880 10:37:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.880 10:37:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.880 [2024-11-20 10:37:40.275276] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:36.880 10:37:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.880 10:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:36.880 10:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.880 10:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.880 10:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:36.880 10:37:40 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:36.880 10:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:36.880 10:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.880 10:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.880 10:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.880 10:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.880 10:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.880 10:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.880 10:37:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.880 10:37:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.880 10:37:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.880 10:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.880 "name": "raid_bdev1", 00:14:36.880 "uuid": "cc2145e8-7a24-4eb3-a10f-1ccf8c798fd5", 00:14:36.880 "strip_size_kb": 0, 00:14:36.880 "state": "online", 00:14:36.880 "raid_level": "raid1", 00:14:36.880 "superblock": true, 00:14:36.880 "num_base_bdevs": 4, 00:14:36.880 "num_base_bdevs_discovered": 3, 00:14:36.880 "num_base_bdevs_operational": 3, 00:14:36.880 "base_bdevs_list": [ 00:14:36.880 { 00:14:36.880 "name": null, 00:14:36.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.880 "is_configured": false, 00:14:36.880 "data_offset": 0, 00:14:36.880 "data_size": 63488 00:14:36.880 }, 00:14:36.880 { 00:14:36.880 "name": "BaseBdev2", 00:14:36.880 "uuid": "f12b0f97-2e49-583a-a9a2-57d97bb572e4", 00:14:36.880 "is_configured": true, 00:14:36.880 
"data_offset": 2048, 00:14:36.880 "data_size": 63488 00:14:36.880 }, 00:14:36.880 { 00:14:36.880 "name": "BaseBdev3", 00:14:36.880 "uuid": "3a6ada83-8a3d-560c-9302-31b33719da58", 00:14:36.880 "is_configured": true, 00:14:36.880 "data_offset": 2048, 00:14:36.880 "data_size": 63488 00:14:36.880 }, 00:14:36.880 { 00:14:36.881 "name": "BaseBdev4", 00:14:36.881 "uuid": "5f30c780-afdf-5a72-9348-67cce08e966a", 00:14:36.881 "is_configured": true, 00:14:36.881 "data_offset": 2048, 00:14:36.881 "data_size": 63488 00:14:36.881 } 00:14:36.881 ] 00:14:36.881 }' 00:14:36.881 10:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.881 10:37:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.448 10:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:37.448 10:37:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.448 10:37:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.448 [2024-11-20 10:37:40.706571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:37.448 [2024-11-20 10:37:40.724660] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:14:37.448 10:37:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.448 10:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:37.448 [2024-11-20 10:37:40.726714] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:38.383 10:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:38.383 10:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.383 10:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:14:38.383 10:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:38.383 10:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.383 10:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.383 10:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.383 10:37:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.383 10:37:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.383 10:37:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.383 10:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.383 "name": "raid_bdev1", 00:14:38.383 "uuid": "cc2145e8-7a24-4eb3-a10f-1ccf8c798fd5", 00:14:38.383 "strip_size_kb": 0, 00:14:38.383 "state": "online", 00:14:38.383 "raid_level": "raid1", 00:14:38.383 "superblock": true, 00:14:38.383 "num_base_bdevs": 4, 00:14:38.383 "num_base_bdevs_discovered": 4, 00:14:38.383 "num_base_bdevs_operational": 4, 00:14:38.383 "process": { 00:14:38.383 "type": "rebuild", 00:14:38.383 "target": "spare", 00:14:38.383 "progress": { 00:14:38.383 "blocks": 20480, 00:14:38.383 "percent": 32 00:14:38.383 } 00:14:38.383 }, 00:14:38.383 "base_bdevs_list": [ 00:14:38.383 { 00:14:38.383 "name": "spare", 00:14:38.383 "uuid": "955d5a1d-45c5-5803-97c0-f16fd6152acc", 00:14:38.383 "is_configured": true, 00:14:38.383 "data_offset": 2048, 00:14:38.383 "data_size": 63488 00:14:38.383 }, 00:14:38.383 { 00:14:38.383 "name": "BaseBdev2", 00:14:38.383 "uuid": "f12b0f97-2e49-583a-a9a2-57d97bb572e4", 00:14:38.383 "is_configured": true, 00:14:38.383 "data_offset": 2048, 00:14:38.383 "data_size": 63488 00:14:38.383 }, 00:14:38.383 { 00:14:38.383 "name": "BaseBdev3", 00:14:38.383 "uuid": 
"3a6ada83-8a3d-560c-9302-31b33719da58", 00:14:38.383 "is_configured": true, 00:14:38.383 "data_offset": 2048, 00:14:38.383 "data_size": 63488 00:14:38.383 }, 00:14:38.383 { 00:14:38.383 "name": "BaseBdev4", 00:14:38.383 "uuid": "5f30c780-afdf-5a72-9348-67cce08e966a", 00:14:38.383 "is_configured": true, 00:14:38.383 "data_offset": 2048, 00:14:38.383 "data_size": 63488 00:14:38.383 } 00:14:38.383 ] 00:14:38.383 }' 00:14:38.383 10:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.383 10:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:38.383 10:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.641 10:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:38.641 10:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:38.641 10:37:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.641 10:37:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.641 [2024-11-20 10:37:41.893347] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:38.641 [2024-11-20 10:37:41.932133] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:38.641 [2024-11-20 10:37:41.932210] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:38.641 [2024-11-20 10:37:41.932229] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:38.641 [2024-11-20 10:37:41.932239] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:38.641 10:37:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.641 10:37:41 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:38.641 10:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.641 10:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.641 10:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:38.641 10:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:38.641 10:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.641 10:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.641 10:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.641 10:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.641 10:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.641 10:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.641 10:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.641 10:37:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.641 10:37:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.641 10:37:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.641 10:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.641 "name": "raid_bdev1", 00:14:38.641 "uuid": "cc2145e8-7a24-4eb3-a10f-1ccf8c798fd5", 00:14:38.641 "strip_size_kb": 0, 00:14:38.641 "state": "online", 00:14:38.641 "raid_level": "raid1", 00:14:38.641 "superblock": true, 00:14:38.641 "num_base_bdevs": 4, 00:14:38.641 
"num_base_bdevs_discovered": 3, 00:14:38.641 "num_base_bdevs_operational": 3, 00:14:38.641 "base_bdevs_list": [ 00:14:38.641 { 00:14:38.642 "name": null, 00:14:38.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.642 "is_configured": false, 00:14:38.642 "data_offset": 0, 00:14:38.642 "data_size": 63488 00:14:38.642 }, 00:14:38.642 { 00:14:38.642 "name": "BaseBdev2", 00:14:38.642 "uuid": "f12b0f97-2e49-583a-a9a2-57d97bb572e4", 00:14:38.642 "is_configured": true, 00:14:38.642 "data_offset": 2048, 00:14:38.642 "data_size": 63488 00:14:38.642 }, 00:14:38.642 { 00:14:38.642 "name": "BaseBdev3", 00:14:38.642 "uuid": "3a6ada83-8a3d-560c-9302-31b33719da58", 00:14:38.642 "is_configured": true, 00:14:38.642 "data_offset": 2048, 00:14:38.642 "data_size": 63488 00:14:38.642 }, 00:14:38.642 { 00:14:38.642 "name": "BaseBdev4", 00:14:38.642 "uuid": "5f30c780-afdf-5a72-9348-67cce08e966a", 00:14:38.642 "is_configured": true, 00:14:38.642 "data_offset": 2048, 00:14:38.642 "data_size": 63488 00:14:38.642 } 00:14:38.642 ] 00:14:38.642 }' 00:14:38.642 10:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.642 10:37:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.211 10:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:39.211 10:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.211 10:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:39.211 10:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:39.211 10:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.211 10:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.211 10:37:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:14:39.211 10:37:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.211 10:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.211 10:37:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.211 10:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.211 "name": "raid_bdev1", 00:14:39.211 "uuid": "cc2145e8-7a24-4eb3-a10f-1ccf8c798fd5", 00:14:39.211 "strip_size_kb": 0, 00:14:39.211 "state": "online", 00:14:39.211 "raid_level": "raid1", 00:14:39.211 "superblock": true, 00:14:39.211 "num_base_bdevs": 4, 00:14:39.211 "num_base_bdevs_discovered": 3, 00:14:39.211 "num_base_bdevs_operational": 3, 00:14:39.211 "base_bdevs_list": [ 00:14:39.211 { 00:14:39.211 "name": null, 00:14:39.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.211 "is_configured": false, 00:14:39.211 "data_offset": 0, 00:14:39.211 "data_size": 63488 00:14:39.211 }, 00:14:39.211 { 00:14:39.211 "name": "BaseBdev2", 00:14:39.211 "uuid": "f12b0f97-2e49-583a-a9a2-57d97bb572e4", 00:14:39.211 "is_configured": true, 00:14:39.211 "data_offset": 2048, 00:14:39.211 "data_size": 63488 00:14:39.211 }, 00:14:39.211 { 00:14:39.211 "name": "BaseBdev3", 00:14:39.211 "uuid": "3a6ada83-8a3d-560c-9302-31b33719da58", 00:14:39.211 "is_configured": true, 00:14:39.211 "data_offset": 2048, 00:14:39.211 "data_size": 63488 00:14:39.211 }, 00:14:39.211 { 00:14:39.211 "name": "BaseBdev4", 00:14:39.211 "uuid": "5f30c780-afdf-5a72-9348-67cce08e966a", 00:14:39.211 "is_configured": true, 00:14:39.211 "data_offset": 2048, 00:14:39.211 "data_size": 63488 00:14:39.211 } 00:14:39.211 ] 00:14:39.211 }' 00:14:39.211 10:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.211 10:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
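The `verify_raid_bdev_process` checks traced above lean on jq's alternative operator (`//`) so that a raid bdev with no active background process yields the literal string `"none"` rather than `null`, which keeps the subsequent `[[ none == \n\o\n\e ]]` comparisons simple. A minimal sketch of that pattern; the JSON here is a trimmed, hypothetical stand-in for real `bdev_raid_get_bdevs` output:

```shell
# Sketch only: jq's '//' returns the right-hand default when the
# left-hand path evaluates to null or false. 'info' is illustrative
# and omits the "process" object entirely, as after a finished rebuild.
info='{"name": "raid_bdev1", "state": "online"}'
type=$(echo "$info" | jq -r '.process.type // "none"')
target=$(echo "$info" | jq -r '.process.target // "none"')
echo "$type $target"   # both fall back to "none"
```

This is why the trace can run the same two jq filters before, during, and after the rebuild and only the expected literal on the right of `[[ ... == ... ]]` changes.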
00:14:39.212 10:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.212 10:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:39.212 10:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:39.212 10:37:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.212 10:37:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.212 [2024-11-20 10:37:42.537628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:39.212 [2024-11-20 10:37:42.551661] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:14:39.212 10:37:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.212 10:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:39.212 [2024-11-20 10:37:42.553512] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:40.154 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:40.154 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.154 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:40.154 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:40.154 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.154 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.154 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.154 10:37:43 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.154 10:37:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.154 10:37:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.154 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.154 "name": "raid_bdev1", 00:14:40.154 "uuid": "cc2145e8-7a24-4eb3-a10f-1ccf8c798fd5", 00:14:40.154 "strip_size_kb": 0, 00:14:40.154 "state": "online", 00:14:40.154 "raid_level": "raid1", 00:14:40.154 "superblock": true, 00:14:40.154 "num_base_bdevs": 4, 00:14:40.154 "num_base_bdevs_discovered": 4, 00:14:40.154 "num_base_bdevs_operational": 4, 00:14:40.154 "process": { 00:14:40.154 "type": "rebuild", 00:14:40.154 "target": "spare", 00:14:40.154 "progress": { 00:14:40.154 "blocks": 20480, 00:14:40.154 "percent": 32 00:14:40.154 } 00:14:40.154 }, 00:14:40.154 "base_bdevs_list": [ 00:14:40.154 { 00:14:40.154 "name": "spare", 00:14:40.154 "uuid": "955d5a1d-45c5-5803-97c0-f16fd6152acc", 00:14:40.154 "is_configured": true, 00:14:40.154 "data_offset": 2048, 00:14:40.154 "data_size": 63488 00:14:40.154 }, 00:14:40.154 { 00:14:40.154 "name": "BaseBdev2", 00:14:40.154 "uuid": "f12b0f97-2e49-583a-a9a2-57d97bb572e4", 00:14:40.154 "is_configured": true, 00:14:40.154 "data_offset": 2048, 00:14:40.154 "data_size": 63488 00:14:40.154 }, 00:14:40.154 { 00:14:40.154 "name": "BaseBdev3", 00:14:40.154 "uuid": "3a6ada83-8a3d-560c-9302-31b33719da58", 00:14:40.154 "is_configured": true, 00:14:40.154 "data_offset": 2048, 00:14:40.154 "data_size": 63488 00:14:40.154 }, 00:14:40.154 { 00:14:40.154 "name": "BaseBdev4", 00:14:40.154 "uuid": "5f30c780-afdf-5a72-9348-67cce08e966a", 00:14:40.154 "is_configured": true, 00:14:40.154 "data_offset": 2048, 00:14:40.154 "data_size": 63488 00:14:40.154 } 00:14:40.154 ] 00:14:40.154 }' 00:14:40.154 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:14:40.412 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:40.412 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.412 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:40.412 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:40.412 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:40.412 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:40.412 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:40.412 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:40.412 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:40.412 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:40.412 10:37:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.412 10:37:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.412 [2024-11-20 10:37:43.708846] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:40.412 [2024-11-20 10:37:43.859038] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:14:40.412 10:37:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.412 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:40.412 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:40.412 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:14:40.412 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.412 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:40.412 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:40.412 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.412 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.412 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.412 10:37:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.412 10:37:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.412 10:37:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.672 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.672 "name": "raid_bdev1", 00:14:40.672 "uuid": "cc2145e8-7a24-4eb3-a10f-1ccf8c798fd5", 00:14:40.672 "strip_size_kb": 0, 00:14:40.672 "state": "online", 00:14:40.672 "raid_level": "raid1", 00:14:40.672 "superblock": true, 00:14:40.672 "num_base_bdevs": 4, 00:14:40.672 "num_base_bdevs_discovered": 3, 00:14:40.672 "num_base_bdevs_operational": 3, 00:14:40.672 "process": { 00:14:40.672 "type": "rebuild", 00:14:40.672 "target": "spare", 00:14:40.672 "progress": { 00:14:40.672 "blocks": 24576, 00:14:40.672 "percent": 38 00:14:40.672 } 00:14:40.672 }, 00:14:40.672 "base_bdevs_list": [ 00:14:40.672 { 00:14:40.672 "name": "spare", 00:14:40.672 "uuid": "955d5a1d-45c5-5803-97c0-f16fd6152acc", 00:14:40.672 "is_configured": true, 00:14:40.672 "data_offset": 2048, 00:14:40.672 "data_size": 63488 00:14:40.672 }, 00:14:40.672 { 00:14:40.672 "name": null, 00:14:40.672 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:40.672 "is_configured": false, 00:14:40.672 "data_offset": 0, 00:14:40.672 "data_size": 63488 00:14:40.672 }, 00:14:40.672 { 00:14:40.672 "name": "BaseBdev3", 00:14:40.672 "uuid": "3a6ada83-8a3d-560c-9302-31b33719da58", 00:14:40.672 "is_configured": true, 00:14:40.672 "data_offset": 2048, 00:14:40.672 "data_size": 63488 00:14:40.672 }, 00:14:40.672 { 00:14:40.672 "name": "BaseBdev4", 00:14:40.672 "uuid": "5f30c780-afdf-5a72-9348-67cce08e966a", 00:14:40.672 "is_configured": true, 00:14:40.672 "data_offset": 2048, 00:14:40.672 "data_size": 63488 00:14:40.672 } 00:14:40.672 ] 00:14:40.672 }' 00:14:40.672 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.672 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:40.672 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.672 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:40.672 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=468 00:14:40.672 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:40.672 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:40.672 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.672 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:40.672 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:40.672 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.672 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.672 
10:37:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.672 10:37:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.672 10:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.672 10:37:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.672 10:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.672 "name": "raid_bdev1", 00:14:40.672 "uuid": "cc2145e8-7a24-4eb3-a10f-1ccf8c798fd5", 00:14:40.672 "strip_size_kb": 0, 00:14:40.672 "state": "online", 00:14:40.672 "raid_level": "raid1", 00:14:40.672 "superblock": true, 00:14:40.672 "num_base_bdevs": 4, 00:14:40.672 "num_base_bdevs_discovered": 3, 00:14:40.672 "num_base_bdevs_operational": 3, 00:14:40.672 "process": { 00:14:40.672 "type": "rebuild", 00:14:40.672 "target": "spare", 00:14:40.672 "progress": { 00:14:40.672 "blocks": 26624, 00:14:40.672 "percent": 41 00:14:40.672 } 00:14:40.672 }, 00:14:40.672 "base_bdevs_list": [ 00:14:40.672 { 00:14:40.672 "name": "spare", 00:14:40.672 "uuid": "955d5a1d-45c5-5803-97c0-f16fd6152acc", 00:14:40.672 "is_configured": true, 00:14:40.672 "data_offset": 2048, 00:14:40.672 "data_size": 63488 00:14:40.672 }, 00:14:40.672 { 00:14:40.672 "name": null, 00:14:40.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.672 "is_configured": false, 00:14:40.672 "data_offset": 0, 00:14:40.672 "data_size": 63488 00:14:40.672 }, 00:14:40.672 { 00:14:40.672 "name": "BaseBdev3", 00:14:40.672 "uuid": "3a6ada83-8a3d-560c-9302-31b33719da58", 00:14:40.672 "is_configured": true, 00:14:40.672 "data_offset": 2048, 00:14:40.672 "data_size": 63488 00:14:40.672 }, 00:14:40.672 { 00:14:40.672 "name": "BaseBdev4", 00:14:40.672 "uuid": "5f30c780-afdf-5a72-9348-67cce08e966a", 00:14:40.672 "is_configured": true, 00:14:40.672 "data_offset": 2048, 00:14:40.672 "data_size": 63488 
00:14:40.672 } 00:14:40.672 ] 00:14:40.672 }' 00:14:40.672 10:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.672 10:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:40.672 10:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.672 10:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:40.672 10:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:42.052 10:37:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:42.052 10:37:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:42.052 10:37:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.052 10:37:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:42.052 10:37:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:42.052 10:37:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.052 10:37:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.052 10:37:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.052 10:37:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.052 10:37:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.052 10:37:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.052 10:37:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.052 "name": "raid_bdev1", 00:14:42.052 "uuid": 
"cc2145e8-7a24-4eb3-a10f-1ccf8c798fd5", 00:14:42.052 "strip_size_kb": 0, 00:14:42.052 "state": "online", 00:14:42.052 "raid_level": "raid1", 00:14:42.052 "superblock": true, 00:14:42.052 "num_base_bdevs": 4, 00:14:42.052 "num_base_bdevs_discovered": 3, 00:14:42.052 "num_base_bdevs_operational": 3, 00:14:42.052 "process": { 00:14:42.052 "type": "rebuild", 00:14:42.052 "target": "spare", 00:14:42.052 "progress": { 00:14:42.052 "blocks": 49152, 00:14:42.052 "percent": 77 00:14:42.052 } 00:14:42.052 }, 00:14:42.052 "base_bdevs_list": [ 00:14:42.052 { 00:14:42.052 "name": "spare", 00:14:42.052 "uuid": "955d5a1d-45c5-5803-97c0-f16fd6152acc", 00:14:42.052 "is_configured": true, 00:14:42.052 "data_offset": 2048, 00:14:42.052 "data_size": 63488 00:14:42.052 }, 00:14:42.052 { 00:14:42.052 "name": null, 00:14:42.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.052 "is_configured": false, 00:14:42.052 "data_offset": 0, 00:14:42.052 "data_size": 63488 00:14:42.052 }, 00:14:42.052 { 00:14:42.052 "name": "BaseBdev3", 00:14:42.052 "uuid": "3a6ada83-8a3d-560c-9302-31b33719da58", 00:14:42.052 "is_configured": true, 00:14:42.052 "data_offset": 2048, 00:14:42.052 "data_size": 63488 00:14:42.052 }, 00:14:42.052 { 00:14:42.052 "name": "BaseBdev4", 00:14:42.052 "uuid": "5f30c780-afdf-5a72-9348-67cce08e966a", 00:14:42.052 "is_configured": true, 00:14:42.052 "data_offset": 2048, 00:14:42.052 "data_size": 63488 00:14:42.052 } 00:14:42.052 ] 00:14:42.052 }' 00:14:42.052 10:37:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.052 10:37:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:42.052 10:37:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.052 10:37:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:42.052 10:37:45 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:14:42.310 [2024-11-20 10:37:45.767937] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:42.310 [2024-11-20 10:37:45.768025] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:42.310 [2024-11-20 10:37:45.768145] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.877 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:42.877 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:42.877 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.877 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:42.877 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:42.877 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.877 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.877 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.877 10:37:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.877 10:37:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.877 10:37:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.877 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.877 "name": "raid_bdev1", 00:14:42.877 "uuid": "cc2145e8-7a24-4eb3-a10f-1ccf8c798fd5", 00:14:42.877 "strip_size_kb": 0, 00:14:42.877 "state": "online", 00:14:42.877 "raid_level": "raid1", 00:14:42.877 "superblock": true, 00:14:42.877 "num_base_bdevs": 
4, 00:14:42.877 "num_base_bdevs_discovered": 3, 00:14:42.877 "num_base_bdevs_operational": 3, 00:14:42.877 "base_bdevs_list": [ 00:14:42.877 { 00:14:42.877 "name": "spare", 00:14:42.877 "uuid": "955d5a1d-45c5-5803-97c0-f16fd6152acc", 00:14:42.877 "is_configured": true, 00:14:42.877 "data_offset": 2048, 00:14:42.877 "data_size": 63488 00:14:42.877 }, 00:14:42.877 { 00:14:42.877 "name": null, 00:14:42.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.877 "is_configured": false, 00:14:42.877 "data_offset": 0, 00:14:42.877 "data_size": 63488 00:14:42.877 }, 00:14:42.877 { 00:14:42.877 "name": "BaseBdev3", 00:14:42.877 "uuid": "3a6ada83-8a3d-560c-9302-31b33719da58", 00:14:42.877 "is_configured": true, 00:14:42.877 "data_offset": 2048, 00:14:42.877 "data_size": 63488 00:14:42.877 }, 00:14:42.877 { 00:14:42.877 "name": "BaseBdev4", 00:14:42.877 "uuid": "5f30c780-afdf-5a72-9348-67cce08e966a", 00:14:42.877 "is_configured": true, 00:14:42.877 "data_offset": 2048, 00:14:42.877 "data_size": 63488 00:14:42.877 } 00:14:42.877 ] 00:14:42.877 }' 00:14:42.877 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.877 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:42.877 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.136 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:43.136 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:43.136 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:43.136 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.136 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:43.136 10:37:46 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:43.136 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.136 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.136 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.136 10:37:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.136 10:37:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.136 10:37:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.136 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.136 "name": "raid_bdev1", 00:14:43.136 "uuid": "cc2145e8-7a24-4eb3-a10f-1ccf8c798fd5", 00:14:43.136 "strip_size_kb": 0, 00:14:43.136 "state": "online", 00:14:43.136 "raid_level": "raid1", 00:14:43.136 "superblock": true, 00:14:43.136 "num_base_bdevs": 4, 00:14:43.136 "num_base_bdevs_discovered": 3, 00:14:43.136 "num_base_bdevs_operational": 3, 00:14:43.136 "base_bdevs_list": [ 00:14:43.136 { 00:14:43.136 "name": "spare", 00:14:43.136 "uuid": "955d5a1d-45c5-5803-97c0-f16fd6152acc", 00:14:43.136 "is_configured": true, 00:14:43.136 "data_offset": 2048, 00:14:43.136 "data_size": 63488 00:14:43.136 }, 00:14:43.136 { 00:14:43.136 "name": null, 00:14:43.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.136 "is_configured": false, 00:14:43.136 "data_offset": 0, 00:14:43.136 "data_size": 63488 00:14:43.136 }, 00:14:43.136 { 00:14:43.136 "name": "BaseBdev3", 00:14:43.136 "uuid": "3a6ada83-8a3d-560c-9302-31b33719da58", 00:14:43.136 "is_configured": true, 00:14:43.136 "data_offset": 2048, 00:14:43.136 "data_size": 63488 00:14:43.136 }, 00:14:43.136 { 00:14:43.136 "name": "BaseBdev4", 00:14:43.136 "uuid": 
"5f30c780-afdf-5a72-9348-67cce08e966a", 00:14:43.136 "is_configured": true, 00:14:43.136 "data_offset": 2048, 00:14:43.136 "data_size": 63488 00:14:43.137 } 00:14:43.137 ] 00:14:43.137 }' 00:14:43.137 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.137 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:43.137 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.137 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:43.137 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:43.137 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.137 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.137 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:43.137 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:43.137 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:43.137 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.137 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.137 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.137 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.137 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.137 10:37:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.137 10:37:46 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.137 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.137 10:37:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.137 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.137 "name": "raid_bdev1", 00:14:43.137 "uuid": "cc2145e8-7a24-4eb3-a10f-1ccf8c798fd5", 00:14:43.137 "strip_size_kb": 0, 00:14:43.137 "state": "online", 00:14:43.137 "raid_level": "raid1", 00:14:43.137 "superblock": true, 00:14:43.137 "num_base_bdevs": 4, 00:14:43.137 "num_base_bdevs_discovered": 3, 00:14:43.137 "num_base_bdevs_operational": 3, 00:14:43.137 "base_bdevs_list": [ 00:14:43.137 { 00:14:43.137 "name": "spare", 00:14:43.137 "uuid": "955d5a1d-45c5-5803-97c0-f16fd6152acc", 00:14:43.137 "is_configured": true, 00:14:43.137 "data_offset": 2048, 00:14:43.137 "data_size": 63488 00:14:43.137 }, 00:14:43.137 { 00:14:43.137 "name": null, 00:14:43.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.137 "is_configured": false, 00:14:43.137 "data_offset": 0, 00:14:43.137 "data_size": 63488 00:14:43.137 }, 00:14:43.137 { 00:14:43.137 "name": "BaseBdev3", 00:14:43.137 "uuid": "3a6ada83-8a3d-560c-9302-31b33719da58", 00:14:43.137 "is_configured": true, 00:14:43.137 "data_offset": 2048, 00:14:43.137 "data_size": 63488 00:14:43.137 }, 00:14:43.137 { 00:14:43.137 "name": "BaseBdev4", 00:14:43.137 "uuid": "5f30c780-afdf-5a72-9348-67cce08e966a", 00:14:43.137 "is_configured": true, 00:14:43.137 "data_offset": 2048, 00:14:43.137 "data_size": 63488 00:14:43.137 } 00:14:43.137 ] 00:14:43.137 }' 00:14:43.137 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.137 10:37:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.705 10:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 
-- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:43.705 10:37:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.705 10:37:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.705 [2024-11-20 10:37:46.995774] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:43.705 [2024-11-20 10:37:46.995811] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:43.705 [2024-11-20 10:37:46.995919] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:43.705 [2024-11-20 10:37:46.996003] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:43.705 [2024-11-20 10:37:46.996014] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:43.705 10:37:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.705 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.705 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:43.705 10:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.705 10:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.705 10:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.705 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:43.705 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:43.705 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:43.705 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 
00:14:43.705 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:43.705 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:43.705 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:43.705 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:43.705 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:43.705 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:43.705 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:43.705 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:43.705 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:43.963 /dev/nbd0 00:14:43.963 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:43.963 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:43.963 10:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:43.963 10:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:43.964 10:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:43.964 10:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:43.964 10:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:43.964 10:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:43.964 10:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:43.964 
10:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:43.964 10:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:43.964 1+0 records in 00:14:43.964 1+0 records out 00:14:43.964 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000578848 s, 7.1 MB/s 00:14:43.964 10:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:43.964 10:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:43.964 10:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:43.964 10:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:43.964 10:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:43.964 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:43.964 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:43.964 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:44.223 /dev/nbd1 00:14:44.223 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:44.223 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:44.223 10:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:44.223 10:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:44.223 10:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:44.223 10:37:47 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:44.223 10:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:44.223 10:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:44.223 10:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:44.223 10:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:44.223 10:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:44.223 1+0 records in 00:14:44.223 1+0 records out 00:14:44.223 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428544 s, 9.6 MB/s 00:14:44.223 10:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:44.223 10:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:44.223 10:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:44.223 10:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:44.223 10:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:44.223 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:44.223 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:44.223 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:44.481 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:44.481 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:44.481 10:37:47 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:44.481 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:44.481 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:44.481 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:44.481 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:44.481 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:44.481 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:44.481 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:44.481 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:44.481 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:44.481 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:44.740 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:44.740 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:44.740 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:44.740 10:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:44.740 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:44.740 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:44.740 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:44.740 10:37:48 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:44.740 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:44.740 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:44.740 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:44.740 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:44.740 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:44.740 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:44.740 10:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.740 10:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.740 10:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.740 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:44.740 10:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.740 10:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.740 [2024-11-20 10:37:48.192933] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:44.740 [2024-11-20 10:37:48.192993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.740 [2024-11-20 10:37:48.193031] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:44.740 [2024-11-20 10:37:48.193040] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.740 [2024-11-20 10:37:48.195157] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.740 [2024-11-20 10:37:48.195195] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:44.740 [2024-11-20 10:37:48.195283] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:44.740 [2024-11-20 10:37:48.195328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:44.740 [2024-11-20 10:37:48.195497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:44.740 [2024-11-20 10:37:48.195582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:44.740 spare 00:14:44.740 10:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.740 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:44.740 10:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.740 10:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.998 [2024-11-20 10:37:48.295481] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:44.998 [2024-11-20 10:37:48.295514] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:44.998 [2024-11-20 10:37:48.295834] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:44.998 [2024-11-20 10:37:48.296031] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:44.998 [2024-11-20 10:37:48.296051] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:44.998 [2024-11-20 10:37:48.296223] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.998 10:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.998 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 3 00:14:44.998 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.998 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.998 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:44.998 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:44.998 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.998 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.998 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.998 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.998 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.998 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.998 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.998 10:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.998 10:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.998 10:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.998 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.998 "name": "raid_bdev1", 00:14:44.998 "uuid": "cc2145e8-7a24-4eb3-a10f-1ccf8c798fd5", 00:14:44.998 "strip_size_kb": 0, 00:14:44.998 "state": "online", 00:14:44.998 "raid_level": "raid1", 00:14:44.998 "superblock": true, 00:14:44.998 "num_base_bdevs": 4, 00:14:44.998 "num_base_bdevs_discovered": 3, 00:14:44.998 "num_base_bdevs_operational": 
3, 00:14:44.998 "base_bdevs_list": [ 00:14:44.998 { 00:14:44.998 "name": "spare", 00:14:44.998 "uuid": "955d5a1d-45c5-5803-97c0-f16fd6152acc", 00:14:44.998 "is_configured": true, 00:14:44.998 "data_offset": 2048, 00:14:44.998 "data_size": 63488 00:14:44.998 }, 00:14:44.998 { 00:14:44.998 "name": null, 00:14:44.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.998 "is_configured": false, 00:14:44.998 "data_offset": 2048, 00:14:44.998 "data_size": 63488 00:14:44.998 }, 00:14:44.998 { 00:14:44.998 "name": "BaseBdev3", 00:14:44.998 "uuid": "3a6ada83-8a3d-560c-9302-31b33719da58", 00:14:44.998 "is_configured": true, 00:14:44.998 "data_offset": 2048, 00:14:44.998 "data_size": 63488 00:14:44.998 }, 00:14:44.998 { 00:14:44.998 "name": "BaseBdev4", 00:14:44.998 "uuid": "5f30c780-afdf-5a72-9348-67cce08e966a", 00:14:44.998 "is_configured": true, 00:14:44.998 "data_offset": 2048, 00:14:44.998 "data_size": 63488 00:14:44.998 } 00:14:44.998 ] 00:14:44.998 }' 00:14:44.998 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.998 10:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.564 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:45.564 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.564 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:45.564 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:45.565 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.565 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.565 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.565 10:37:48 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.565 10:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.565 10:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.565 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.565 "name": "raid_bdev1", 00:14:45.565 "uuid": "cc2145e8-7a24-4eb3-a10f-1ccf8c798fd5", 00:14:45.565 "strip_size_kb": 0, 00:14:45.565 "state": "online", 00:14:45.565 "raid_level": "raid1", 00:14:45.565 "superblock": true, 00:14:45.565 "num_base_bdevs": 4, 00:14:45.565 "num_base_bdevs_discovered": 3, 00:14:45.565 "num_base_bdevs_operational": 3, 00:14:45.565 "base_bdevs_list": [ 00:14:45.565 { 00:14:45.565 "name": "spare", 00:14:45.565 "uuid": "955d5a1d-45c5-5803-97c0-f16fd6152acc", 00:14:45.565 "is_configured": true, 00:14:45.565 "data_offset": 2048, 00:14:45.565 "data_size": 63488 00:14:45.565 }, 00:14:45.565 { 00:14:45.565 "name": null, 00:14:45.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.565 "is_configured": false, 00:14:45.565 "data_offset": 2048, 00:14:45.565 "data_size": 63488 00:14:45.565 }, 00:14:45.565 { 00:14:45.565 "name": "BaseBdev3", 00:14:45.565 "uuid": "3a6ada83-8a3d-560c-9302-31b33719da58", 00:14:45.565 "is_configured": true, 00:14:45.565 "data_offset": 2048, 00:14:45.565 "data_size": 63488 00:14:45.565 }, 00:14:45.565 { 00:14:45.565 "name": "BaseBdev4", 00:14:45.565 "uuid": "5f30c780-afdf-5a72-9348-67cce08e966a", 00:14:45.565 "is_configured": true, 00:14:45.565 "data_offset": 2048, 00:14:45.565 "data_size": 63488 00:14:45.565 } 00:14:45.565 ] 00:14:45.565 }' 00:14:45.565 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.565 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:45.565 10:37:48 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.565 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:45.565 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.565 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:45.565 10:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.565 10:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.565 10:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.565 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:45.565 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:45.565 10:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.565 10:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.565 [2024-11-20 10:37:48.919788] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:45.565 10:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.565 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:45.565 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.565 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.565 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:45.565 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:45.565 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:14:45.565 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.565 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.565 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.565 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.565 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.565 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.565 10:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.565 10:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.565 10:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.565 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.565 "name": "raid_bdev1", 00:14:45.565 "uuid": "cc2145e8-7a24-4eb3-a10f-1ccf8c798fd5", 00:14:45.565 "strip_size_kb": 0, 00:14:45.565 "state": "online", 00:14:45.565 "raid_level": "raid1", 00:14:45.565 "superblock": true, 00:14:45.565 "num_base_bdevs": 4, 00:14:45.565 "num_base_bdevs_discovered": 2, 00:14:45.565 "num_base_bdevs_operational": 2, 00:14:45.565 "base_bdevs_list": [ 00:14:45.565 { 00:14:45.565 "name": null, 00:14:45.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.565 "is_configured": false, 00:14:45.565 "data_offset": 0, 00:14:45.565 "data_size": 63488 00:14:45.565 }, 00:14:45.565 { 00:14:45.565 "name": null, 00:14:45.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.565 "is_configured": false, 00:14:45.565 "data_offset": 2048, 00:14:45.565 "data_size": 63488 00:14:45.565 }, 00:14:45.565 { 00:14:45.565 "name": "BaseBdev3", 00:14:45.565 
"uuid": "3a6ada83-8a3d-560c-9302-31b33719da58", 00:14:45.565 "is_configured": true, 00:14:45.565 "data_offset": 2048, 00:14:45.565 "data_size": 63488 00:14:45.565 }, 00:14:45.565 { 00:14:45.565 "name": "BaseBdev4", 00:14:45.565 "uuid": "5f30c780-afdf-5a72-9348-67cce08e966a", 00:14:45.565 "is_configured": true, 00:14:45.565 "data_offset": 2048, 00:14:45.565 "data_size": 63488 00:14:45.565 } 00:14:45.565 ] 00:14:45.565 }' 00:14:45.565 10:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.565 10:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.131 10:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:46.131 10:37:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.131 10:37:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.131 [2024-11-20 10:37:49.395064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:46.131 [2024-11-20 10:37:49.395265] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:46.131 [2024-11-20 10:37:49.395281] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:46.131 [2024-11-20 10:37:49.395324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:46.132 [2024-11-20 10:37:49.410422] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:14:46.132 10:37:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.132 10:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:46.132 [2024-11-20 10:37:49.412306] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:47.118 10:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:47.118 10:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.118 10:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:47.118 10:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:47.118 10:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.118 10:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.118 10:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.118 10:37:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.118 10:37:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.118 10:37:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.118 10:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.118 "name": "raid_bdev1", 00:14:47.118 "uuid": "cc2145e8-7a24-4eb3-a10f-1ccf8c798fd5", 00:14:47.118 "strip_size_kb": 0, 00:14:47.118 "state": "online", 00:14:47.118 "raid_level": "raid1", 
00:14:47.118 "superblock": true, 00:14:47.118 "num_base_bdevs": 4, 00:14:47.118 "num_base_bdevs_discovered": 3, 00:14:47.118 "num_base_bdevs_operational": 3, 00:14:47.118 "process": { 00:14:47.118 "type": "rebuild", 00:14:47.118 "target": "spare", 00:14:47.118 "progress": { 00:14:47.118 "blocks": 20480, 00:14:47.118 "percent": 32 00:14:47.118 } 00:14:47.118 }, 00:14:47.118 "base_bdevs_list": [ 00:14:47.118 { 00:14:47.118 "name": "spare", 00:14:47.118 "uuid": "955d5a1d-45c5-5803-97c0-f16fd6152acc", 00:14:47.118 "is_configured": true, 00:14:47.118 "data_offset": 2048, 00:14:47.118 "data_size": 63488 00:14:47.118 }, 00:14:47.118 { 00:14:47.118 "name": null, 00:14:47.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.118 "is_configured": false, 00:14:47.118 "data_offset": 2048, 00:14:47.118 "data_size": 63488 00:14:47.118 }, 00:14:47.118 { 00:14:47.118 "name": "BaseBdev3", 00:14:47.118 "uuid": "3a6ada83-8a3d-560c-9302-31b33719da58", 00:14:47.118 "is_configured": true, 00:14:47.118 "data_offset": 2048, 00:14:47.118 "data_size": 63488 00:14:47.118 }, 00:14:47.118 { 00:14:47.118 "name": "BaseBdev4", 00:14:47.118 "uuid": "5f30c780-afdf-5a72-9348-67cce08e966a", 00:14:47.118 "is_configured": true, 00:14:47.118 "data_offset": 2048, 00:14:47.118 "data_size": 63488 00:14:47.118 } 00:14:47.118 ] 00:14:47.118 }' 00:14:47.118 10:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.118 10:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:47.118 10:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.118 10:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:47.118 10:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:47.118 10:37:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:47.118 10:37:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.118 [2024-11-20 10:37:50.563742] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:47.378 [2024-11-20 10:37:50.617963] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:47.378 [2024-11-20 10:37:50.618028] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.378 [2024-11-20 10:37:50.618047] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:47.378 [2024-11-20 10:37:50.618053] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:47.378 10:37:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.378 10:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:47.378 10:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:47.378 10:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:47.378 10:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:47.378 10:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:47.378 10:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:47.378 10:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.378 10:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.378 10:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.378 10:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.378 10:37:50 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.378 10:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.378 10:37:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.378 10:37:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.378 10:37:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.378 10:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.378 "name": "raid_bdev1", 00:14:47.378 "uuid": "cc2145e8-7a24-4eb3-a10f-1ccf8c798fd5", 00:14:47.378 "strip_size_kb": 0, 00:14:47.378 "state": "online", 00:14:47.378 "raid_level": "raid1", 00:14:47.378 "superblock": true, 00:14:47.378 "num_base_bdevs": 4, 00:14:47.378 "num_base_bdevs_discovered": 2, 00:14:47.378 "num_base_bdevs_operational": 2, 00:14:47.378 "base_bdevs_list": [ 00:14:47.378 { 00:14:47.378 "name": null, 00:14:47.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.378 "is_configured": false, 00:14:47.378 "data_offset": 0, 00:14:47.378 "data_size": 63488 00:14:47.378 }, 00:14:47.378 { 00:14:47.378 "name": null, 00:14:47.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.378 "is_configured": false, 00:14:47.378 "data_offset": 2048, 00:14:47.378 "data_size": 63488 00:14:47.378 }, 00:14:47.378 { 00:14:47.378 "name": "BaseBdev3", 00:14:47.378 "uuid": "3a6ada83-8a3d-560c-9302-31b33719da58", 00:14:47.378 "is_configured": true, 00:14:47.378 "data_offset": 2048, 00:14:47.378 "data_size": 63488 00:14:47.378 }, 00:14:47.378 { 00:14:47.378 "name": "BaseBdev4", 00:14:47.378 "uuid": "5f30c780-afdf-5a72-9348-67cce08e966a", 00:14:47.378 "is_configured": true, 00:14:47.378 "data_offset": 2048, 00:14:47.378 "data_size": 63488 00:14:47.378 } 00:14:47.378 ] 00:14:47.378 }' 00:14:47.378 10:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:47.378 10:37:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.946 10:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:47.946 10:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.946 10:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.946 [2024-11-20 10:37:51.155593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:47.946 [2024-11-20 10:37:51.155669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.946 [2024-11-20 10:37:51.155700] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:14:47.946 [2024-11-20 10:37:51.155711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.946 [2024-11-20 10:37:51.156269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.946 [2024-11-20 10:37:51.156300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:47.946 [2024-11-20 10:37:51.156417] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:47.946 [2024-11-20 10:37:51.156441] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:47.946 [2024-11-20 10:37:51.156456] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:47.946 [2024-11-20 10:37:51.156490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:47.946 [2024-11-20 10:37:51.171625] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:14:47.946 spare 00:14:47.946 10:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.946 [2024-11-20 10:37:51.173540] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:47.946 10:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:48.884 10:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:48.884 10:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.884 10:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:48.885 10:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:48.885 10:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.885 10:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.885 10:37:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.885 10:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.885 10:37:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.885 10:37:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.885 10:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.885 "name": "raid_bdev1", 00:14:48.885 "uuid": "cc2145e8-7a24-4eb3-a10f-1ccf8c798fd5", 00:14:48.885 "strip_size_kb": 0, 00:14:48.885 "state": "online", 00:14:48.885 
"raid_level": "raid1", 00:14:48.885 "superblock": true, 00:14:48.885 "num_base_bdevs": 4, 00:14:48.885 "num_base_bdevs_discovered": 3, 00:14:48.885 "num_base_bdevs_operational": 3, 00:14:48.885 "process": { 00:14:48.885 "type": "rebuild", 00:14:48.885 "target": "spare", 00:14:48.885 "progress": { 00:14:48.885 "blocks": 20480, 00:14:48.885 "percent": 32 00:14:48.885 } 00:14:48.885 }, 00:14:48.885 "base_bdevs_list": [ 00:14:48.885 { 00:14:48.885 "name": "spare", 00:14:48.885 "uuid": "955d5a1d-45c5-5803-97c0-f16fd6152acc", 00:14:48.885 "is_configured": true, 00:14:48.885 "data_offset": 2048, 00:14:48.885 "data_size": 63488 00:14:48.885 }, 00:14:48.885 { 00:14:48.885 "name": null, 00:14:48.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.885 "is_configured": false, 00:14:48.885 "data_offset": 2048, 00:14:48.885 "data_size": 63488 00:14:48.885 }, 00:14:48.885 { 00:14:48.885 "name": "BaseBdev3", 00:14:48.885 "uuid": "3a6ada83-8a3d-560c-9302-31b33719da58", 00:14:48.885 "is_configured": true, 00:14:48.885 "data_offset": 2048, 00:14:48.885 "data_size": 63488 00:14:48.885 }, 00:14:48.885 { 00:14:48.885 "name": "BaseBdev4", 00:14:48.885 "uuid": "5f30c780-afdf-5a72-9348-67cce08e966a", 00:14:48.885 "is_configured": true, 00:14:48.885 "data_offset": 2048, 00:14:48.885 "data_size": 63488 00:14:48.885 } 00:14:48.885 ] 00:14:48.885 }' 00:14:48.885 10:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.885 10:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:48.885 10:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.885 10:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:48.885 10:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:48.885 10:37:52 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.885 10:37:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.885 [2024-11-20 10:37:52.320990] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:49.145 [2024-11-20 10:37:52.379242] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:49.145 [2024-11-20 10:37:52.379315] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.145 [2024-11-20 10:37:52.379348] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:49.145 [2024-11-20 10:37:52.379357] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:49.145 10:37:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.145 10:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:49.145 10:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.145 10:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.145 10:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.145 10:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.145 10:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:49.145 10:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.145 10:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.145 10:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.145 10:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.145 
10:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.145 10:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.145 10:37:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.145 10:37:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.145 10:37:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.145 10:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.145 "name": "raid_bdev1", 00:14:49.145 "uuid": "cc2145e8-7a24-4eb3-a10f-1ccf8c798fd5", 00:14:49.145 "strip_size_kb": 0, 00:14:49.145 "state": "online", 00:14:49.145 "raid_level": "raid1", 00:14:49.145 "superblock": true, 00:14:49.145 "num_base_bdevs": 4, 00:14:49.145 "num_base_bdevs_discovered": 2, 00:14:49.145 "num_base_bdevs_operational": 2, 00:14:49.145 "base_bdevs_list": [ 00:14:49.145 { 00:14:49.145 "name": null, 00:14:49.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.145 "is_configured": false, 00:14:49.145 "data_offset": 0, 00:14:49.145 "data_size": 63488 00:14:49.145 }, 00:14:49.145 { 00:14:49.145 "name": null, 00:14:49.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.145 "is_configured": false, 00:14:49.145 "data_offset": 2048, 00:14:49.145 "data_size": 63488 00:14:49.145 }, 00:14:49.145 { 00:14:49.145 "name": "BaseBdev3", 00:14:49.145 "uuid": "3a6ada83-8a3d-560c-9302-31b33719da58", 00:14:49.145 "is_configured": true, 00:14:49.145 "data_offset": 2048, 00:14:49.145 "data_size": 63488 00:14:49.145 }, 00:14:49.145 { 00:14:49.145 "name": "BaseBdev4", 00:14:49.145 "uuid": "5f30c780-afdf-5a72-9348-67cce08e966a", 00:14:49.145 "is_configured": true, 00:14:49.145 "data_offset": 2048, 00:14:49.145 "data_size": 63488 00:14:49.145 } 00:14:49.145 ] 00:14:49.145 }' 00:14:49.145 10:37:52 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.145 10:37:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.405 10:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:49.405 10:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.405 10:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:49.405 10:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:49.405 10:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.405 10:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.405 10:37:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.405 10:37:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.405 10:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.405 10:37:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.664 10:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.664 "name": "raid_bdev1", 00:14:49.664 "uuid": "cc2145e8-7a24-4eb3-a10f-1ccf8c798fd5", 00:14:49.664 "strip_size_kb": 0, 00:14:49.664 "state": "online", 00:14:49.664 "raid_level": "raid1", 00:14:49.664 "superblock": true, 00:14:49.664 "num_base_bdevs": 4, 00:14:49.664 "num_base_bdevs_discovered": 2, 00:14:49.664 "num_base_bdevs_operational": 2, 00:14:49.664 "base_bdevs_list": [ 00:14:49.664 { 00:14:49.664 "name": null, 00:14:49.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.664 "is_configured": false, 00:14:49.664 "data_offset": 0, 00:14:49.664 "data_size": 63488 00:14:49.664 }, 00:14:49.664 
{ 00:14:49.664 "name": null, 00:14:49.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.664 "is_configured": false, 00:14:49.664 "data_offset": 2048, 00:14:49.664 "data_size": 63488 00:14:49.664 }, 00:14:49.664 { 00:14:49.664 "name": "BaseBdev3", 00:14:49.664 "uuid": "3a6ada83-8a3d-560c-9302-31b33719da58", 00:14:49.664 "is_configured": true, 00:14:49.664 "data_offset": 2048, 00:14:49.664 "data_size": 63488 00:14:49.664 }, 00:14:49.664 { 00:14:49.664 "name": "BaseBdev4", 00:14:49.664 "uuid": "5f30c780-afdf-5a72-9348-67cce08e966a", 00:14:49.664 "is_configured": true, 00:14:49.664 "data_offset": 2048, 00:14:49.664 "data_size": 63488 00:14:49.664 } 00:14:49.664 ] 00:14:49.664 }' 00:14:49.664 10:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.664 10:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:49.664 10:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.664 10:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:49.664 10:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:49.664 10:37:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.664 10:37:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.664 10:37:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.664 10:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:49.664 10:37:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.664 10:37:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.664 [2024-11-20 10:37:53.021628] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:49.664 [2024-11-20 10:37:53.021697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.664 [2024-11-20 10:37:53.021719] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:14:49.664 [2024-11-20 10:37:53.021729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.664 [2024-11-20 10:37:53.022237] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.664 [2024-11-20 10:37:53.022273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:49.664 [2024-11-20 10:37:53.022385] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:49.664 [2024-11-20 10:37:53.022406] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:49.664 [2024-11-20 10:37:53.022415] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:49.664 [2024-11-20 10:37:53.022441] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:49.664 BaseBdev1 00:14:49.664 10:37:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.664 10:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:50.605 10:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:50.605 10:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:50.605 10:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:50.605 10:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:50.605 10:37:54 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:50.605 10:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:50.605 10:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.605 10:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.605 10:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.605 10:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.605 10:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.605 10:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.605 10:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.605 10:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.605 10:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.866 10:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.866 "name": "raid_bdev1", 00:14:50.866 "uuid": "cc2145e8-7a24-4eb3-a10f-1ccf8c798fd5", 00:14:50.866 "strip_size_kb": 0, 00:14:50.866 "state": "online", 00:14:50.866 "raid_level": "raid1", 00:14:50.866 "superblock": true, 00:14:50.866 "num_base_bdevs": 4, 00:14:50.866 "num_base_bdevs_discovered": 2, 00:14:50.866 "num_base_bdevs_operational": 2, 00:14:50.866 "base_bdevs_list": [ 00:14:50.866 { 00:14:50.866 "name": null, 00:14:50.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.866 "is_configured": false, 00:14:50.866 "data_offset": 0, 00:14:50.866 "data_size": 63488 00:14:50.866 }, 00:14:50.866 { 00:14:50.866 "name": null, 00:14:50.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.866 
"is_configured": false, 00:14:50.866 "data_offset": 2048, 00:14:50.866 "data_size": 63488 00:14:50.866 }, 00:14:50.866 { 00:14:50.866 "name": "BaseBdev3", 00:14:50.866 "uuid": "3a6ada83-8a3d-560c-9302-31b33719da58", 00:14:50.866 "is_configured": true, 00:14:50.866 "data_offset": 2048, 00:14:50.866 "data_size": 63488 00:14:50.866 }, 00:14:50.866 { 00:14:50.866 "name": "BaseBdev4", 00:14:50.866 "uuid": "5f30c780-afdf-5a72-9348-67cce08e966a", 00:14:50.866 "is_configured": true, 00:14:50.866 "data_offset": 2048, 00:14:50.866 "data_size": 63488 00:14:50.866 } 00:14:50.866 ] 00:14:50.866 }' 00:14:50.866 10:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.866 10:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.125 10:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:51.125 10:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:51.125 10:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:51.125 10:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:51.125 10:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:51.125 10:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.125 10:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.125 10:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.125 10:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.125 10:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.383 10:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:51.383 "name": "raid_bdev1", 00:14:51.383 "uuid": "cc2145e8-7a24-4eb3-a10f-1ccf8c798fd5", 00:14:51.383 "strip_size_kb": 0, 00:14:51.383 "state": "online", 00:14:51.383 "raid_level": "raid1", 00:14:51.383 "superblock": true, 00:14:51.383 "num_base_bdevs": 4, 00:14:51.383 "num_base_bdevs_discovered": 2, 00:14:51.383 "num_base_bdevs_operational": 2, 00:14:51.383 "base_bdevs_list": [ 00:14:51.383 { 00:14:51.383 "name": null, 00:14:51.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.383 "is_configured": false, 00:14:51.383 "data_offset": 0, 00:14:51.383 "data_size": 63488 00:14:51.383 }, 00:14:51.383 { 00:14:51.383 "name": null, 00:14:51.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.383 "is_configured": false, 00:14:51.383 "data_offset": 2048, 00:14:51.383 "data_size": 63488 00:14:51.383 }, 00:14:51.383 { 00:14:51.383 "name": "BaseBdev3", 00:14:51.383 "uuid": "3a6ada83-8a3d-560c-9302-31b33719da58", 00:14:51.383 "is_configured": true, 00:14:51.383 "data_offset": 2048, 00:14:51.383 "data_size": 63488 00:14:51.383 }, 00:14:51.383 { 00:14:51.383 "name": "BaseBdev4", 00:14:51.383 "uuid": "5f30c780-afdf-5a72-9348-67cce08e966a", 00:14:51.383 "is_configured": true, 00:14:51.383 "data_offset": 2048, 00:14:51.383 "data_size": 63488 00:14:51.383 } 00:14:51.383 ] 00:14:51.383 }' 00:14:51.383 10:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.384 10:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:51.384 10:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:51.384 10:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:51.384 10:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:51.384 10:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:14:51.384 10:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:51.384 10:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:51.384 10:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:51.384 10:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:51.384 10:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:51.384 10:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:51.384 10:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.384 10:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.384 [2024-11-20 10:37:54.719196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:51.384 [2024-11-20 10:37:54.719456] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:51.384 [2024-11-20 10:37:54.719481] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:51.384 request: 00:14:51.384 { 00:14:51.384 "base_bdev": "BaseBdev1", 00:14:51.384 "raid_bdev": "raid_bdev1", 00:14:51.384 "method": "bdev_raid_add_base_bdev", 00:14:51.384 "req_id": 1 00:14:51.384 } 00:14:51.384 Got JSON-RPC error response 00:14:51.384 response: 00:14:51.384 { 00:14:51.384 "code": -22, 00:14:51.384 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:51.384 } 00:14:51.384 10:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:51.384 10:37:54 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:14:51.384 10:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:51.384 10:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:51.384 10:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:51.384 10:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:52.320 10:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:52.320 10:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.320 10:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.320 10:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.320 10:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.320 10:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:52.320 10:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.320 10:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.320 10:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.320 10:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.320 10:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.320 10:37:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.320 10:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.320 10:37:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:52.320 10:37:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.320 10:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.320 "name": "raid_bdev1", 00:14:52.320 "uuid": "cc2145e8-7a24-4eb3-a10f-1ccf8c798fd5", 00:14:52.320 "strip_size_kb": 0, 00:14:52.320 "state": "online", 00:14:52.320 "raid_level": "raid1", 00:14:52.320 "superblock": true, 00:14:52.320 "num_base_bdevs": 4, 00:14:52.320 "num_base_bdevs_discovered": 2, 00:14:52.320 "num_base_bdevs_operational": 2, 00:14:52.320 "base_bdevs_list": [ 00:14:52.320 { 00:14:52.320 "name": null, 00:14:52.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.320 "is_configured": false, 00:14:52.320 "data_offset": 0, 00:14:52.320 "data_size": 63488 00:14:52.320 }, 00:14:52.320 { 00:14:52.320 "name": null, 00:14:52.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.320 "is_configured": false, 00:14:52.320 "data_offset": 2048, 00:14:52.320 "data_size": 63488 00:14:52.320 }, 00:14:52.320 { 00:14:52.320 "name": "BaseBdev3", 00:14:52.320 "uuid": "3a6ada83-8a3d-560c-9302-31b33719da58", 00:14:52.320 "is_configured": true, 00:14:52.320 "data_offset": 2048, 00:14:52.320 "data_size": 63488 00:14:52.320 }, 00:14:52.320 { 00:14:52.320 "name": "BaseBdev4", 00:14:52.320 "uuid": "5f30c780-afdf-5a72-9348-67cce08e966a", 00:14:52.320 "is_configured": true, 00:14:52.320 "data_offset": 2048, 00:14:52.320 "data_size": 63488 00:14:52.320 } 00:14:52.320 ] 00:14:52.320 }' 00:14:52.320 10:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.320 10:37:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.888 10:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:52.888 10:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.888 10:37:56 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:52.888 10:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:52.888 10:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.888 10:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.888 10:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.888 10:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.888 10:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.888 10:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.888 10:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.888 "name": "raid_bdev1", 00:14:52.888 "uuid": "cc2145e8-7a24-4eb3-a10f-1ccf8c798fd5", 00:14:52.888 "strip_size_kb": 0, 00:14:52.888 "state": "online", 00:14:52.888 "raid_level": "raid1", 00:14:52.888 "superblock": true, 00:14:52.888 "num_base_bdevs": 4, 00:14:52.888 "num_base_bdevs_discovered": 2, 00:14:52.888 "num_base_bdevs_operational": 2, 00:14:52.888 "base_bdevs_list": [ 00:14:52.888 { 00:14:52.888 "name": null, 00:14:52.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.888 "is_configured": false, 00:14:52.888 "data_offset": 0, 00:14:52.888 "data_size": 63488 00:14:52.888 }, 00:14:52.888 { 00:14:52.888 "name": null, 00:14:52.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.888 "is_configured": false, 00:14:52.888 "data_offset": 2048, 00:14:52.888 "data_size": 63488 00:14:52.888 }, 00:14:52.888 { 00:14:52.888 "name": "BaseBdev3", 00:14:52.888 "uuid": "3a6ada83-8a3d-560c-9302-31b33719da58", 00:14:52.888 "is_configured": true, 00:14:52.888 "data_offset": 2048, 00:14:52.888 "data_size": 63488 00:14:52.888 }, 
00:14:52.888 { 00:14:52.888 "name": "BaseBdev4", 00:14:52.888 "uuid": "5f30c780-afdf-5a72-9348-67cce08e966a", 00:14:52.888 "is_configured": true, 00:14:52.888 "data_offset": 2048, 00:14:52.888 "data_size": 63488 00:14:52.888 } 00:14:52.888 ] 00:14:52.888 }' 00:14:52.888 10:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.888 10:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:52.888 10:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.147 10:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:53.147 10:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78143 00:14:53.147 10:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78143 ']' 00:14:53.147 10:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78143 00:14:53.147 10:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:53.147 10:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:53.147 10:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78143 00:14:53.147 10:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:53.147 10:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:53.147 killing process with pid 78143 00:14:53.147 10:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78143' 00:14:53.147 10:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78143 00:14:53.147 Received shutdown signal, test time was about 60.000000 seconds 00:14:53.147 00:14:53.147 Latency(us) 00:14:53.147 
[2024-11-20T10:37:56.626Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.147 [2024-11-20T10:37:56.626Z] =================================================================================================================== 00:14:53.147 [2024-11-20T10:37:56.626Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:53.147 [2024-11-20 10:37:56.415608] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:53.147 [2024-11-20 10:37:56.415750] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:53.147 10:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78143 00:14:53.147 [2024-11-20 10:37:56.415838] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:53.147 [2024-11-20 10:37:56.415850] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:53.716 [2024-11-20 10:37:56.946742] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:54.654 10:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:54.654 00:14:54.654 real 0m25.276s 00:14:54.654 user 0m30.898s 00:14:54.654 sys 0m3.696s 00:14:54.654 10:37:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:54.654 10:37:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.654 ************************************ 00:14:54.654 END TEST raid_rebuild_test_sb 00:14:54.654 ************************************ 00:14:54.654 10:37:58 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:14:54.654 10:37:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:54.654 10:37:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:54.654 10:37:58 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:14:54.654 ************************************ 00:14:54.654 START TEST raid_rebuild_test_io 00:14:54.654 ************************************ 00:14:54.654 10:37:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:14:54.654 10:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:54.654 10:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:54.654 10:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:54.654 10:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:54.654 10:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:54.654 10:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:54.654 10:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:54.654 10:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:54.654 10:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:54.654 10:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:54.914 10:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:54.914 10:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:54.914 10:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:54.914 10:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:54.914 10:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:54.914 10:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:54.914 10:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:14:54.914 10:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:54.914 10:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:54.914 10:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:54.914 10:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:54.914 10:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:54.914 10:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:54.914 10:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:54.914 10:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:54.914 10:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:54.914 10:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:54.914 10:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:54.914 10:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:54.914 10:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78905 00:14:54.914 10:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78905 00:14:54.914 10:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:54.914 10:37:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78905 ']' 00:14:54.914 10:37:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.914 10:37:58 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:14:54.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.914 10:37:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.914 10:37:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:54.914 10:37:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.914 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:54.914 Zero copy mechanism will not be used. 00:14:54.914 [2024-11-20 10:37:58.221817] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:14:54.914 [2024-11-20 10:37:58.221935] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78905 ] 00:14:55.174 [2024-11-20 10:37:58.397333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.174 [2024-11-20 10:37:58.516050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.433 [2024-11-20 10:37:58.723707] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:55.433 [2024-11-20 10:37:58.723742] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:55.698 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:55.699 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:14:55.699 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:55.699 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:14:55.699 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.699 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.699 BaseBdev1_malloc 00:14:55.699 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.699 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:55.699 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.699 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.699 [2024-11-20 10:37:59.111584] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:55.699 [2024-11-20 10:37:59.111697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.699 [2024-11-20 10:37:59.111725] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:55.699 [2024-11-20 10:37:59.111736] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.699 [2024-11-20 10:37:59.113772] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.699 [2024-11-20 10:37:59.113811] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:55.699 BaseBdev1 00:14:55.699 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.699 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:55.699 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:55.699 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.699 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:14:55.699 BaseBdev2_malloc 00:14:55.699 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.699 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:55.699 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.699 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.699 [2024-11-20 10:37:59.166444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:55.699 [2024-11-20 10:37:59.166570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.699 [2024-11-20 10:37:59.166594] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:55.699 [2024-11-20 10:37:59.166606] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.699 [2024-11-20 10:37:59.168592] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.699 [2024-11-20 10:37:59.168630] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:55.965 BaseBdev2 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.966 BaseBdev3_malloc 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.966 [2024-11-20 10:37:59.229133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:55.966 [2024-11-20 10:37:59.229188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.966 [2024-11-20 10:37:59.229208] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:55.966 [2024-11-20 10:37:59.229219] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.966 [2024-11-20 10:37:59.231230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.966 [2024-11-20 10:37:59.231271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:55.966 BaseBdev3 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.966 BaseBdev4_malloc 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.966 [2024-11-20 10:37:59.284560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:55.966 [2024-11-20 10:37:59.284656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.966 [2024-11-20 10:37:59.284678] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:55.966 [2024-11-20 10:37:59.284689] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.966 [2024-11-20 10:37:59.286625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.966 [2024-11-20 10:37:59.286665] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:55.966 BaseBdev4 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.966 spare_malloc 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.966 spare_delay 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.966 [2024-11-20 10:37:59.351819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:55.966 [2024-11-20 10:37:59.351926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.966 [2024-11-20 10:37:59.351949] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:55.966 [2024-11-20 10:37:59.351959] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.966 [2024-11-20 10:37:59.354027] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.966 [2024-11-20 10:37:59.354065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:55.966 spare 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.966 [2024-11-20 10:37:59.363841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:55.966 [2024-11-20 10:37:59.365602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:55.966 [2024-11-20 10:37:59.365672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:55.966 [2024-11-20 10:37:59.365737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:14:55.966 [2024-11-20 10:37:59.365809] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:55.966 [2024-11-20 10:37:59.365823] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:55.966 [2024-11-20 10:37:59.366061] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:55.966 [2024-11-20 10:37:59.366214] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:55.966 [2024-11-20 10:37:59.366226] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:55.966 [2024-11-20 10:37:59.366389] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.966 "name": "raid_bdev1", 00:14:55.966 "uuid": "3698e0fe-937a-4824-836c-ce8237631a29", 00:14:55.966 "strip_size_kb": 0, 00:14:55.966 "state": "online", 00:14:55.966 "raid_level": "raid1", 00:14:55.966 "superblock": false, 00:14:55.966 "num_base_bdevs": 4, 00:14:55.966 "num_base_bdevs_discovered": 4, 00:14:55.966 "num_base_bdevs_operational": 4, 00:14:55.966 "base_bdevs_list": [ 00:14:55.966 { 00:14:55.966 "name": "BaseBdev1", 00:14:55.966 "uuid": "55581716-b1ec-5d3b-829b-cb70a38a709c", 00:14:55.966 "is_configured": true, 00:14:55.966 "data_offset": 0, 00:14:55.966 "data_size": 65536 00:14:55.966 }, 00:14:55.966 { 00:14:55.966 "name": "BaseBdev2", 00:14:55.966 "uuid": "f4c1f3b9-d353-5f35-901d-7fbcb036e34c", 00:14:55.966 "is_configured": true, 00:14:55.966 "data_offset": 0, 00:14:55.966 "data_size": 65536 00:14:55.966 }, 00:14:55.966 { 00:14:55.966 "name": "BaseBdev3", 00:14:55.966 "uuid": "77ad73d0-6836-56a2-b3d1-7e281460eb3c", 00:14:55.966 "is_configured": true, 00:14:55.966 "data_offset": 0, 00:14:55.966 "data_size": 65536 00:14:55.966 }, 00:14:55.966 { 00:14:55.966 "name": "BaseBdev4", 00:14:55.966 "uuid": "aa34441a-e708-5dce-bd54-01e582004303", 00:14:55.966 "is_configured": true, 00:14:55.966 "data_offset": 0, 00:14:55.966 "data_size": 65536 00:14:55.966 } 00:14:55.966 ] 00:14:55.966 }' 00:14:55.966 
10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.966 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.532 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:56.532 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:56.532 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.532 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.532 [2024-11-20 10:37:59.827432] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:56.532 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.532 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:56.532 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.532 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.532 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.532 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:56.532 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.532 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:56.532 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:56.532 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:56.532 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:56.532 10:37:59 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.532 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.532 [2024-11-20 10:37:59.942852] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:56.533 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.533 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:56.533 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.533 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.533 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:56.533 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:56.533 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:56.533 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.533 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.533 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.533 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.533 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.533 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.533 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.533 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.533 10:37:59 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.533 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.533 "name": "raid_bdev1", 00:14:56.533 "uuid": "3698e0fe-937a-4824-836c-ce8237631a29", 00:14:56.533 "strip_size_kb": 0, 00:14:56.533 "state": "online", 00:14:56.533 "raid_level": "raid1", 00:14:56.533 "superblock": false, 00:14:56.533 "num_base_bdevs": 4, 00:14:56.533 "num_base_bdevs_discovered": 3, 00:14:56.533 "num_base_bdevs_operational": 3, 00:14:56.533 "base_bdevs_list": [ 00:14:56.533 { 00:14:56.533 "name": null, 00:14:56.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.533 "is_configured": false, 00:14:56.533 "data_offset": 0, 00:14:56.533 "data_size": 65536 00:14:56.533 }, 00:14:56.533 { 00:14:56.533 "name": "BaseBdev2", 00:14:56.533 "uuid": "f4c1f3b9-d353-5f35-901d-7fbcb036e34c", 00:14:56.533 "is_configured": true, 00:14:56.533 "data_offset": 0, 00:14:56.533 "data_size": 65536 00:14:56.533 }, 00:14:56.533 { 00:14:56.533 "name": "BaseBdev3", 00:14:56.533 "uuid": "77ad73d0-6836-56a2-b3d1-7e281460eb3c", 00:14:56.533 "is_configured": true, 00:14:56.533 "data_offset": 0, 00:14:56.533 "data_size": 65536 00:14:56.533 }, 00:14:56.533 { 00:14:56.533 "name": "BaseBdev4", 00:14:56.533 "uuid": "aa34441a-e708-5dce-bd54-01e582004303", 00:14:56.533 "is_configured": true, 00:14:56.533 "data_offset": 0, 00:14:56.533 "data_size": 65536 00:14:56.533 } 00:14:56.533 ] 00:14:56.533 }' 00:14:56.533 10:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.533 10:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.792 [2024-11-20 10:38:00.022084] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:56.792 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:56.792 Zero copy mechanism will not be used. 00:14:56.792 Running I/O for 60 seconds... 
00:14:57.052 10:38:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:57.052 10:38:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.052 10:38:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.052 [2024-11-20 10:38:00.357066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:57.052 10:38:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.052 10:38:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:57.052 [2024-11-20 10:38:00.417629] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:57.052 [2024-11-20 10:38:00.419611] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:57.311 [2024-11-20 10:38:00.536449] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:57.311 [2024-11-20 10:38:00.537112] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:57.311 [2024-11-20 10:38:00.761307] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:57.311 [2024-11-20 10:38:00.761763] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:57.572 [2024-11-20 10:38:01.004788] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:57.572 [2024-11-20 10:38:01.006173] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:58.140 147.00 IOPS, 441.00 MiB/s [2024-11-20T10:38:01.619Z] 10:38:01 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:58.140 10:38:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.140 10:38:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:58.140 10:38:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:58.140 10:38:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.140 10:38:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.140 10:38:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.140 10:38:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.140 10:38:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.140 10:38:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.140 10:38:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.140 "name": "raid_bdev1", 00:14:58.140 "uuid": "3698e0fe-937a-4824-836c-ce8237631a29", 00:14:58.140 "strip_size_kb": 0, 00:14:58.140 "state": "online", 00:14:58.140 "raid_level": "raid1", 00:14:58.140 "superblock": false, 00:14:58.140 "num_base_bdevs": 4, 00:14:58.140 "num_base_bdevs_discovered": 4, 00:14:58.140 "num_base_bdevs_operational": 4, 00:14:58.140 "process": { 00:14:58.140 "type": "rebuild", 00:14:58.140 "target": "spare", 00:14:58.140 "progress": { 00:14:58.140 "blocks": 12288, 00:14:58.140 "percent": 18 00:14:58.140 } 00:14:58.140 }, 00:14:58.140 "base_bdevs_list": [ 00:14:58.140 { 00:14:58.141 "name": "spare", 00:14:58.141 "uuid": "23eb003a-37b0-50e1-b9a1-38876469c839", 00:14:58.141 "is_configured": true, 00:14:58.141 "data_offset": 0, 00:14:58.141 "data_size": 65536 00:14:58.141 }, 00:14:58.141 { 
00:14:58.141 "name": "BaseBdev2", 00:14:58.141 "uuid": "f4c1f3b9-d353-5f35-901d-7fbcb036e34c", 00:14:58.141 "is_configured": true, 00:14:58.141 "data_offset": 0, 00:14:58.141 "data_size": 65536 00:14:58.141 }, 00:14:58.141 { 00:14:58.141 "name": "BaseBdev3", 00:14:58.141 "uuid": "77ad73d0-6836-56a2-b3d1-7e281460eb3c", 00:14:58.141 "is_configured": true, 00:14:58.141 "data_offset": 0, 00:14:58.141 "data_size": 65536 00:14:58.141 }, 00:14:58.141 { 00:14:58.141 "name": "BaseBdev4", 00:14:58.141 "uuid": "aa34441a-e708-5dce-bd54-01e582004303", 00:14:58.141 "is_configured": true, 00:14:58.141 "data_offset": 0, 00:14:58.141 "data_size": 65536 00:14:58.141 } 00:14:58.141 ] 00:14:58.141 }' 00:14:58.141 10:38:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.141 [2024-11-20 10:38:01.473022] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:58.141 10:38:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:58.141 10:38:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.141 10:38:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:58.141 10:38:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:58.141 10:38:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.141 10:38:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.141 [2024-11-20 10:38:01.550240] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:58.141 [2024-11-20 10:38:01.596566] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:58.400 [2024-11-20 10:38:01.702103] bdev_raid.c:2571:raid_bdev_process_finish_done: 
*WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:58.400 [2024-11-20 10:38:01.719859] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.400 [2024-11-20 10:38:01.720018] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:58.400 [2024-11-20 10:38:01.720042] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:58.400 [2024-11-20 10:38:01.756223] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:58.400 10:38:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.400 10:38:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:58.400 10:38:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.400 10:38:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.400 10:38:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:58.400 10:38:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:58.400 10:38:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:58.400 10:38:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.400 10:38:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.400 10:38:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.400 10:38:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.400 10:38:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.400 10:38:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:14:58.400 10:38:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.400 10:38:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.400 10:38:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.400 10:38:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.400 "name": "raid_bdev1", 00:14:58.400 "uuid": "3698e0fe-937a-4824-836c-ce8237631a29", 00:14:58.400 "strip_size_kb": 0, 00:14:58.400 "state": "online", 00:14:58.400 "raid_level": "raid1", 00:14:58.400 "superblock": false, 00:14:58.400 "num_base_bdevs": 4, 00:14:58.400 "num_base_bdevs_discovered": 3, 00:14:58.400 "num_base_bdevs_operational": 3, 00:14:58.400 "base_bdevs_list": [ 00:14:58.400 { 00:14:58.400 "name": null, 00:14:58.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.400 "is_configured": false, 00:14:58.400 "data_offset": 0, 00:14:58.400 "data_size": 65536 00:14:58.400 }, 00:14:58.400 { 00:14:58.400 "name": "BaseBdev2", 00:14:58.400 "uuid": "f4c1f3b9-d353-5f35-901d-7fbcb036e34c", 00:14:58.400 "is_configured": true, 00:14:58.400 "data_offset": 0, 00:14:58.400 "data_size": 65536 00:14:58.400 }, 00:14:58.400 { 00:14:58.400 "name": "BaseBdev3", 00:14:58.400 "uuid": "77ad73d0-6836-56a2-b3d1-7e281460eb3c", 00:14:58.400 "is_configured": true, 00:14:58.400 "data_offset": 0, 00:14:58.400 "data_size": 65536 00:14:58.400 }, 00:14:58.400 { 00:14:58.400 "name": "BaseBdev4", 00:14:58.400 "uuid": "aa34441a-e708-5dce-bd54-01e582004303", 00:14:58.400 "is_configured": true, 00:14:58.400 "data_offset": 0, 00:14:58.400 "data_size": 65536 00:14:58.400 } 00:14:58.400 ] 00:14:58.400 }' 00:14:58.400 10:38:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.400 10:38:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.916 146.00 IOPS, 438.00 MiB/s 
[2024-11-20T10:38:02.395Z] 10:38:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:58.916 10:38:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.916 10:38:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:58.916 10:38:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:58.916 10:38:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.916 10:38:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.916 10:38:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.917 10:38:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.917 10:38:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.917 10:38:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.917 10:38:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.917 "name": "raid_bdev1", 00:14:58.917 "uuid": "3698e0fe-937a-4824-836c-ce8237631a29", 00:14:58.917 "strip_size_kb": 0, 00:14:58.917 "state": "online", 00:14:58.917 "raid_level": "raid1", 00:14:58.917 "superblock": false, 00:14:58.917 "num_base_bdevs": 4, 00:14:58.917 "num_base_bdevs_discovered": 3, 00:14:58.917 "num_base_bdevs_operational": 3, 00:14:58.917 "base_bdevs_list": [ 00:14:58.917 { 00:14:58.917 "name": null, 00:14:58.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.917 "is_configured": false, 00:14:58.917 "data_offset": 0, 00:14:58.917 "data_size": 65536 00:14:58.917 }, 00:14:58.917 { 00:14:58.917 "name": "BaseBdev2", 00:14:58.917 "uuid": "f4c1f3b9-d353-5f35-901d-7fbcb036e34c", 00:14:58.917 "is_configured": true, 00:14:58.917 
"data_offset": 0, 00:14:58.917 "data_size": 65536 00:14:58.917 }, 00:14:58.917 { 00:14:58.917 "name": "BaseBdev3", 00:14:58.917 "uuid": "77ad73d0-6836-56a2-b3d1-7e281460eb3c", 00:14:58.917 "is_configured": true, 00:14:58.917 "data_offset": 0, 00:14:58.917 "data_size": 65536 00:14:58.917 }, 00:14:58.917 { 00:14:58.917 "name": "BaseBdev4", 00:14:58.917 "uuid": "aa34441a-e708-5dce-bd54-01e582004303", 00:14:58.917 "is_configured": true, 00:14:58.917 "data_offset": 0, 00:14:58.917 "data_size": 65536 00:14:58.917 } 00:14:58.917 ] 00:14:58.917 }' 00:14:58.917 10:38:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.917 10:38:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:58.917 10:38:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.917 10:38:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:58.917 10:38:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:58.917 10:38:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.917 10:38:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.917 [2024-11-20 10:38:02.347035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:59.175 10:38:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.175 10:38:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:59.175 [2024-11-20 10:38:02.410126] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:59.175 [2024-11-20 10:38:02.412199] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:59.175 [2024-11-20 10:38:02.528430] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:59.175 [2024-11-20 10:38:02.529109] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:59.175 [2024-11-20 10:38:02.644594] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:59.175 [2024-11-20 10:38:02.645449] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:59.745 [2024-11-20 10:38:02.999375] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:59.745 143.33 IOPS, 430.00 MiB/s [2024-11-20T10:38:03.224Z] [2024-11-20 10:38:03.216173] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:59.745 [2024-11-20 10:38:03.216575] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:00.005 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:00.005 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.005 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:00.005 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:00.005 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.005 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.005 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.005 10:38:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:00.006 10:38:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.006 10:38:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.006 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.006 "name": "raid_bdev1", 00:15:00.006 "uuid": "3698e0fe-937a-4824-836c-ce8237631a29", 00:15:00.006 "strip_size_kb": 0, 00:15:00.006 "state": "online", 00:15:00.006 "raid_level": "raid1", 00:15:00.006 "superblock": false, 00:15:00.006 "num_base_bdevs": 4, 00:15:00.006 "num_base_bdevs_discovered": 4, 00:15:00.006 "num_base_bdevs_operational": 4, 00:15:00.006 "process": { 00:15:00.006 "type": "rebuild", 00:15:00.006 "target": "spare", 00:15:00.006 "progress": { 00:15:00.006 "blocks": 10240, 00:15:00.006 "percent": 15 00:15:00.006 } 00:15:00.006 }, 00:15:00.006 "base_bdevs_list": [ 00:15:00.006 { 00:15:00.006 "name": "spare", 00:15:00.006 "uuid": "23eb003a-37b0-50e1-b9a1-38876469c839", 00:15:00.006 "is_configured": true, 00:15:00.006 "data_offset": 0, 00:15:00.006 "data_size": 65536 00:15:00.006 }, 00:15:00.006 { 00:15:00.006 "name": "BaseBdev2", 00:15:00.006 "uuid": "f4c1f3b9-d353-5f35-901d-7fbcb036e34c", 00:15:00.006 "is_configured": true, 00:15:00.006 "data_offset": 0, 00:15:00.006 "data_size": 65536 00:15:00.006 }, 00:15:00.006 { 00:15:00.006 "name": "BaseBdev3", 00:15:00.006 "uuid": "77ad73d0-6836-56a2-b3d1-7e281460eb3c", 00:15:00.006 "is_configured": true, 00:15:00.006 "data_offset": 0, 00:15:00.006 "data_size": 65536 00:15:00.006 }, 00:15:00.006 { 00:15:00.006 "name": "BaseBdev4", 00:15:00.006 "uuid": "aa34441a-e708-5dce-bd54-01e582004303", 00:15:00.006 "is_configured": true, 00:15:00.006 "data_offset": 0, 00:15:00.006 "data_size": 65536 00:15:00.006 } 00:15:00.006 ] 00:15:00.006 }' 00:15:00.006 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.265 10:38:03 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:00.265 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.265 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:00.265 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:00.266 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:00.266 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:00.266 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:00.266 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:00.266 10:38:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.266 10:38:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.266 [2024-11-20 10:38:03.558436] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:00.266 [2024-11-20 10:38:03.570974] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:00.266 [2024-11-20 10:38:03.572571] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:00.266 [2024-11-20 10:38:03.680885] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:15:00.266 [2024-11-20 10:38:03.681017] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:15:00.266 10:38:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.266 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:00.266 10:38:03 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:00.266 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:00.266 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.266 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:00.266 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:00.266 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.266 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.266 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.266 10:38:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.266 10:38:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.266 10:38:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.525 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.525 "name": "raid_bdev1", 00:15:00.525 "uuid": "3698e0fe-937a-4824-836c-ce8237631a29", 00:15:00.525 "strip_size_kb": 0, 00:15:00.525 "state": "online", 00:15:00.525 "raid_level": "raid1", 00:15:00.525 "superblock": false, 00:15:00.525 "num_base_bdevs": 4, 00:15:00.525 "num_base_bdevs_discovered": 3, 00:15:00.525 "num_base_bdevs_operational": 3, 00:15:00.525 "process": { 00:15:00.525 "type": "rebuild", 00:15:00.525 "target": "spare", 00:15:00.525 "progress": { 00:15:00.525 "blocks": 14336, 00:15:00.525 "percent": 21 00:15:00.525 } 00:15:00.525 }, 00:15:00.525 "base_bdevs_list": [ 00:15:00.525 { 00:15:00.525 "name": "spare", 00:15:00.525 "uuid": 
"23eb003a-37b0-50e1-b9a1-38876469c839", 00:15:00.526 "is_configured": true, 00:15:00.526 "data_offset": 0, 00:15:00.526 "data_size": 65536 00:15:00.526 }, 00:15:00.526 { 00:15:00.526 "name": null, 00:15:00.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.526 "is_configured": false, 00:15:00.526 "data_offset": 0, 00:15:00.526 "data_size": 65536 00:15:00.526 }, 00:15:00.526 { 00:15:00.526 "name": "BaseBdev3", 00:15:00.526 "uuid": "77ad73d0-6836-56a2-b3d1-7e281460eb3c", 00:15:00.526 "is_configured": true, 00:15:00.526 "data_offset": 0, 00:15:00.526 "data_size": 65536 00:15:00.526 }, 00:15:00.526 { 00:15:00.526 "name": "BaseBdev4", 00:15:00.526 "uuid": "aa34441a-e708-5dce-bd54-01e582004303", 00:15:00.526 "is_configured": true, 00:15:00.526 "data_offset": 0, 00:15:00.526 "data_size": 65536 00:15:00.526 } 00:15:00.526 ] 00:15:00.526 }' 00:15:00.526 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.526 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:00.526 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.526 [2024-11-20 10:38:03.800179] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:00.526 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:00.526 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=488 00:15:00.526 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:00.526 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:00.526 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.526 10:38:03 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:00.526 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:00.526 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.526 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.526 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.526 10:38:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.526 10:38:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.526 10:38:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.526 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.526 "name": "raid_bdev1", 00:15:00.526 "uuid": "3698e0fe-937a-4824-836c-ce8237631a29", 00:15:00.526 "strip_size_kb": 0, 00:15:00.526 "state": "online", 00:15:00.526 "raid_level": "raid1", 00:15:00.526 "superblock": false, 00:15:00.526 "num_base_bdevs": 4, 00:15:00.526 "num_base_bdevs_discovered": 3, 00:15:00.526 "num_base_bdevs_operational": 3, 00:15:00.526 "process": { 00:15:00.526 "type": "rebuild", 00:15:00.526 "target": "spare", 00:15:00.526 "progress": { 00:15:00.526 "blocks": 16384, 00:15:00.526 "percent": 25 00:15:00.526 } 00:15:00.526 }, 00:15:00.526 "base_bdevs_list": [ 00:15:00.526 { 00:15:00.526 "name": "spare", 00:15:00.526 "uuid": "23eb003a-37b0-50e1-b9a1-38876469c839", 00:15:00.526 "is_configured": true, 00:15:00.526 "data_offset": 0, 00:15:00.526 "data_size": 65536 00:15:00.526 }, 00:15:00.526 { 00:15:00.526 "name": null, 00:15:00.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.526 "is_configured": false, 00:15:00.526 "data_offset": 0, 00:15:00.526 "data_size": 65536 00:15:00.526 }, 00:15:00.526 { 00:15:00.526 "name": "BaseBdev3", 
00:15:00.526 "uuid": "77ad73d0-6836-56a2-b3d1-7e281460eb3c", 00:15:00.526 "is_configured": true, 00:15:00.526 "data_offset": 0, 00:15:00.526 "data_size": 65536 00:15:00.526 }, 00:15:00.526 { 00:15:00.526 "name": "BaseBdev4", 00:15:00.526 "uuid": "aa34441a-e708-5dce-bd54-01e582004303", 00:15:00.526 "is_configured": true, 00:15:00.526 "data_offset": 0, 00:15:00.526 "data_size": 65536 00:15:00.526 } 00:15:00.526 ] 00:15:00.526 }' 00:15:00.526 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.526 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:00.526 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.526 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:00.526 10:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:00.786 127.25 IOPS, 381.75 MiB/s [2024-11-20T10:38:04.265Z] [2024-11-20 10:38:04.027177] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:00.786 [2024-11-20 10:38:04.028330] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:00.786 [2024-11-20 10:38:04.250813] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:01.726 10:38:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:01.726 10:38:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:01.726 10:38:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:01.727 10:38:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:15:01.727 10:38:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:01.727 10:38:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:01.727 10:38:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.727 10:38:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.727 10:38:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.727 10:38:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.727 10:38:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.727 10:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:01.727 "name": "raid_bdev1", 00:15:01.727 "uuid": "3698e0fe-937a-4824-836c-ce8237631a29", 00:15:01.727 "strip_size_kb": 0, 00:15:01.727 "state": "online", 00:15:01.727 "raid_level": "raid1", 00:15:01.727 "superblock": false, 00:15:01.727 "num_base_bdevs": 4, 00:15:01.727 "num_base_bdevs_discovered": 3, 00:15:01.727 "num_base_bdevs_operational": 3, 00:15:01.727 "process": { 00:15:01.727 "type": "rebuild", 00:15:01.727 "target": "spare", 00:15:01.727 "progress": { 00:15:01.727 "blocks": 32768, 00:15:01.727 "percent": 50 00:15:01.727 } 00:15:01.727 }, 00:15:01.727 "base_bdevs_list": [ 00:15:01.727 { 00:15:01.727 "name": "spare", 00:15:01.727 "uuid": "23eb003a-37b0-50e1-b9a1-38876469c839", 00:15:01.727 "is_configured": true, 00:15:01.727 "data_offset": 0, 00:15:01.727 "data_size": 65536 00:15:01.727 }, 00:15:01.727 { 00:15:01.727 "name": null, 00:15:01.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.727 "is_configured": false, 00:15:01.727 "data_offset": 0, 00:15:01.727 "data_size": 65536 00:15:01.727 }, 00:15:01.727 { 00:15:01.727 "name": "BaseBdev3", 00:15:01.727 "uuid": 
"77ad73d0-6836-56a2-b3d1-7e281460eb3c", 00:15:01.727 "is_configured": true, 00:15:01.727 "data_offset": 0, 00:15:01.727 "data_size": 65536 00:15:01.727 }, 00:15:01.727 { 00:15:01.727 "name": "BaseBdev4", 00:15:01.727 "uuid": "aa34441a-e708-5dce-bd54-01e582004303", 00:15:01.727 "is_configured": true, 00:15:01.727 "data_offset": 0, 00:15:01.727 "data_size": 65536 00:15:01.727 } 00:15:01.727 ] 00:15:01.727 }' 00:15:01.727 10:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:01.727 109.60 IOPS, 328.80 MiB/s [2024-11-20T10:38:05.206Z] 10:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:01.727 10:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:01.727 10:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:01.727 10:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:01.987 [2024-11-20 10:38:05.339078] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:15:02.247 [2024-11-20 10:38:05.546930] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:15:02.247 [2024-11-20 10:38:05.547613] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:15:02.506 [2024-11-20 10:38:05.859612] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:15:02.506 [2024-11-20 10:38:05.976812] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:15:02.506 [2024-11-20 10:38:05.977456] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 
00:15:02.766 98.83 IOPS, 296.50 MiB/s [2024-11-20T10:38:06.245Z] 10:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:02.766 10:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.766 10:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.766 10:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.766 10:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.766 10:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.766 10:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.766 10:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.767 10:38:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.767 10:38:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.767 10:38:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.767 10:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.767 "name": "raid_bdev1", 00:15:02.767 "uuid": "3698e0fe-937a-4824-836c-ce8237631a29", 00:15:02.767 "strip_size_kb": 0, 00:15:02.767 "state": "online", 00:15:02.767 "raid_level": "raid1", 00:15:02.767 "superblock": false, 00:15:02.767 "num_base_bdevs": 4, 00:15:02.767 "num_base_bdevs_discovered": 3, 00:15:02.767 "num_base_bdevs_operational": 3, 00:15:02.767 "process": { 00:15:02.767 "type": "rebuild", 00:15:02.767 "target": "spare", 00:15:02.767 "progress": { 00:15:02.767 "blocks": 47104, 00:15:02.767 "percent": 71 00:15:02.767 } 00:15:02.767 }, 00:15:02.767 "base_bdevs_list": [ 00:15:02.767 { 00:15:02.767 
"name": "spare", 00:15:02.767 "uuid": "23eb003a-37b0-50e1-b9a1-38876469c839", 00:15:02.767 "is_configured": true, 00:15:02.767 "data_offset": 0, 00:15:02.767 "data_size": 65536 00:15:02.767 }, 00:15:02.767 { 00:15:02.767 "name": null, 00:15:02.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.767 "is_configured": false, 00:15:02.767 "data_offset": 0, 00:15:02.767 "data_size": 65536 00:15:02.767 }, 00:15:02.767 { 00:15:02.767 "name": "BaseBdev3", 00:15:02.767 "uuid": "77ad73d0-6836-56a2-b3d1-7e281460eb3c", 00:15:02.767 "is_configured": true, 00:15:02.767 "data_offset": 0, 00:15:02.767 "data_size": 65536 00:15:02.767 }, 00:15:02.767 { 00:15:02.767 "name": "BaseBdev4", 00:15:02.767 "uuid": "aa34441a-e708-5dce-bd54-01e582004303", 00:15:02.767 "is_configured": true, 00:15:02.767 "data_offset": 0, 00:15:02.767 "data_size": 65536 00:15:02.767 } 00:15:02.767 ] 00:15:02.767 }' 00:15:02.767 10:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.767 10:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:02.767 10:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.767 10:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:02.767 10:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:03.027 [2024-11-20 10:38:06.425260] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:15:03.856 88.57 IOPS, 265.71 MiB/s [2024-11-20T10:38:07.335Z] [2024-11-20 10:38:07.185473] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:03.856 10:38:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:03.856 10:38:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:15:03.856 10:38:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.856 10:38:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:03.856 10:38:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:03.856 10:38:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.856 10:38:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.856 10:38:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.856 10:38:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.856 10:38:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.856 10:38:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.856 10:38:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.856 "name": "raid_bdev1", 00:15:03.856 "uuid": "3698e0fe-937a-4824-836c-ce8237631a29", 00:15:03.856 "strip_size_kb": 0, 00:15:03.856 "state": "online", 00:15:03.856 "raid_level": "raid1", 00:15:03.856 "superblock": false, 00:15:03.856 "num_base_bdevs": 4, 00:15:03.856 "num_base_bdevs_discovered": 3, 00:15:03.856 "num_base_bdevs_operational": 3, 00:15:03.856 "process": { 00:15:03.856 "type": "rebuild", 00:15:03.856 "target": "spare", 00:15:03.856 "progress": { 00:15:03.856 "blocks": 65536, 00:15:03.856 "percent": 100 00:15:03.856 } 00:15:03.856 }, 00:15:03.856 "base_bdevs_list": [ 00:15:03.856 { 00:15:03.856 "name": "spare", 00:15:03.856 "uuid": "23eb003a-37b0-50e1-b9a1-38876469c839", 00:15:03.856 "is_configured": true, 00:15:03.856 "data_offset": 0, 00:15:03.856 "data_size": 65536 00:15:03.856 }, 00:15:03.856 { 00:15:03.856 "name": null, 00:15:03.856 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:03.856 "is_configured": false, 00:15:03.856 "data_offset": 0, 00:15:03.856 "data_size": 65536 00:15:03.856 }, 00:15:03.856 { 00:15:03.856 "name": "BaseBdev3", 00:15:03.856 "uuid": "77ad73d0-6836-56a2-b3d1-7e281460eb3c", 00:15:03.856 "is_configured": true, 00:15:03.856 "data_offset": 0, 00:15:03.856 "data_size": 65536 00:15:03.856 }, 00:15:03.856 { 00:15:03.856 "name": "BaseBdev4", 00:15:03.856 "uuid": "aa34441a-e708-5dce-bd54-01e582004303", 00:15:03.856 "is_configured": true, 00:15:03.856 "data_offset": 0, 00:15:03.856 "data_size": 65536 00:15:03.856 } 00:15:03.856 ] 00:15:03.856 }' 00:15:03.856 [2024-11-20 10:38:07.285344] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:03.856 10:38:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.856 [2024-11-20 10:38:07.294449] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.115 10:38:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:04.115 10:38:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.115 10:38:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:04.115 10:38:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:04.942 81.75 IOPS, 245.25 MiB/s [2024-11-20T10:38:08.421Z] 10:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:04.942 10:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:04.942 10:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.942 10:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:04.942 10:38:08 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:04.942 10:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.942 10:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.942 10:38:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.942 10:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.942 10:38:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.942 10:38:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.205 10:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.205 "name": "raid_bdev1", 00:15:05.205 "uuid": "3698e0fe-937a-4824-836c-ce8237631a29", 00:15:05.205 "strip_size_kb": 0, 00:15:05.205 "state": "online", 00:15:05.205 "raid_level": "raid1", 00:15:05.205 "superblock": false, 00:15:05.205 "num_base_bdevs": 4, 00:15:05.205 "num_base_bdevs_discovered": 3, 00:15:05.205 "num_base_bdevs_operational": 3, 00:15:05.205 "base_bdevs_list": [ 00:15:05.205 { 00:15:05.205 "name": "spare", 00:15:05.205 "uuid": "23eb003a-37b0-50e1-b9a1-38876469c839", 00:15:05.205 "is_configured": true, 00:15:05.205 "data_offset": 0, 00:15:05.205 "data_size": 65536 00:15:05.205 }, 00:15:05.205 { 00:15:05.205 "name": null, 00:15:05.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.205 "is_configured": false, 00:15:05.205 "data_offset": 0, 00:15:05.205 "data_size": 65536 00:15:05.205 }, 00:15:05.205 { 00:15:05.205 "name": "BaseBdev3", 00:15:05.205 "uuid": "77ad73d0-6836-56a2-b3d1-7e281460eb3c", 00:15:05.205 "is_configured": true, 00:15:05.205 "data_offset": 0, 00:15:05.205 "data_size": 65536 00:15:05.205 }, 00:15:05.205 { 00:15:05.205 "name": "BaseBdev4", 00:15:05.205 "uuid": "aa34441a-e708-5dce-bd54-01e582004303", 
00:15:05.205 "is_configured": true, 00:15:05.205 "data_offset": 0, 00:15:05.205 "data_size": 65536 00:15:05.205 } 00:15:05.205 ] 00:15:05.205 }' 00:15:05.205 10:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.205 10:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:05.205 10:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.205 10:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:05.205 10:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:15:05.205 10:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:05.205 10:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.205 10:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:05.205 10:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:05.205 10:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.205 10:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.206 10:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.206 10:38:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.206 10:38:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.206 10:38:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.206 10:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.206 "name": "raid_bdev1", 00:15:05.206 "uuid": "3698e0fe-937a-4824-836c-ce8237631a29", 
00:15:05.206 "strip_size_kb": 0, 00:15:05.206 "state": "online", 00:15:05.206 "raid_level": "raid1", 00:15:05.206 "superblock": false, 00:15:05.206 "num_base_bdevs": 4, 00:15:05.206 "num_base_bdevs_discovered": 3, 00:15:05.206 "num_base_bdevs_operational": 3, 00:15:05.206 "base_bdevs_list": [ 00:15:05.206 { 00:15:05.206 "name": "spare", 00:15:05.206 "uuid": "23eb003a-37b0-50e1-b9a1-38876469c839", 00:15:05.206 "is_configured": true, 00:15:05.206 "data_offset": 0, 00:15:05.206 "data_size": 65536 00:15:05.206 }, 00:15:05.206 { 00:15:05.206 "name": null, 00:15:05.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.206 "is_configured": false, 00:15:05.206 "data_offset": 0, 00:15:05.206 "data_size": 65536 00:15:05.206 }, 00:15:05.206 { 00:15:05.206 "name": "BaseBdev3", 00:15:05.206 "uuid": "77ad73d0-6836-56a2-b3d1-7e281460eb3c", 00:15:05.206 "is_configured": true, 00:15:05.206 "data_offset": 0, 00:15:05.206 "data_size": 65536 00:15:05.206 }, 00:15:05.206 { 00:15:05.206 "name": "BaseBdev4", 00:15:05.206 "uuid": "aa34441a-e708-5dce-bd54-01e582004303", 00:15:05.206 "is_configured": true, 00:15:05.206 "data_offset": 0, 00:15:05.206 "data_size": 65536 00:15:05.206 } 00:15:05.206 ] 00:15:05.206 }' 00:15:05.206 10:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.206 10:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:05.206 10:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.206 10:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:05.206 10:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:05.206 10:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.206 10:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:05.206 10:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:05.206 10:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:05.206 10:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:05.206 10:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.206 10:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.206 10:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.206 10:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.206 10:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.206 10:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.206 10:38:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.206 10:38:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.485 10:38:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.485 10:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.485 "name": "raid_bdev1", 00:15:05.485 "uuid": "3698e0fe-937a-4824-836c-ce8237631a29", 00:15:05.485 "strip_size_kb": 0, 00:15:05.485 "state": "online", 00:15:05.485 "raid_level": "raid1", 00:15:05.485 "superblock": false, 00:15:05.485 "num_base_bdevs": 4, 00:15:05.485 "num_base_bdevs_discovered": 3, 00:15:05.485 "num_base_bdevs_operational": 3, 00:15:05.485 "base_bdevs_list": [ 00:15:05.485 { 00:15:05.485 "name": "spare", 00:15:05.485 "uuid": "23eb003a-37b0-50e1-b9a1-38876469c839", 00:15:05.485 "is_configured": true, 00:15:05.485 "data_offset": 0, 00:15:05.485 
"data_size": 65536 00:15:05.485 }, 00:15:05.485 { 00:15:05.485 "name": null, 00:15:05.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.485 "is_configured": false, 00:15:05.485 "data_offset": 0, 00:15:05.485 "data_size": 65536 00:15:05.485 }, 00:15:05.485 { 00:15:05.485 "name": "BaseBdev3", 00:15:05.485 "uuid": "77ad73d0-6836-56a2-b3d1-7e281460eb3c", 00:15:05.485 "is_configured": true, 00:15:05.485 "data_offset": 0, 00:15:05.485 "data_size": 65536 00:15:05.485 }, 00:15:05.485 { 00:15:05.485 "name": "BaseBdev4", 00:15:05.485 "uuid": "aa34441a-e708-5dce-bd54-01e582004303", 00:15:05.485 "is_configured": true, 00:15:05.485 "data_offset": 0, 00:15:05.485 "data_size": 65536 00:15:05.485 } 00:15:05.485 ] 00:15:05.485 }' 00:15:05.485 10:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.485 10:38:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.745 77.00 IOPS, 231.00 MiB/s [2024-11-20T10:38:09.224Z] 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:05.745 10:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.745 10:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.745 [2024-11-20 10:38:09.122624] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:05.745 [2024-11-20 10:38:09.122715] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:05.745 00:15:05.745 Latency(us) 00:15:05.745 [2024-11-20T10:38:09.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.745 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:05.745 raid_bdev1 : 9.13 76.44 229.31 0.00 0.00 18371.43 332.69 111268.11 00:15:05.745 [2024-11-20T10:38:09.224Z] 
=================================================================================================================== 00:15:05.745 [2024-11-20T10:38:09.224Z] Total : 76.44 229.31 0.00 0.00 18371.43 332.69 111268.11 00:15:05.745 [2024-11-20 10:38:09.160671] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.745 [2024-11-20 10:38:09.160787] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:05.745 [2024-11-20 10:38:09.160905] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:05.745 [2024-11-20 10:38:09.160964] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:05.745 { 00:15:05.745 "results": [ 00:15:05.745 { 00:15:05.745 "job": "raid_bdev1", 00:15:05.745 "core_mask": "0x1", 00:15:05.745 "workload": "randrw", 00:15:05.745 "percentage": 50, 00:15:05.745 "status": "finished", 00:15:05.745 "queue_depth": 2, 00:15:05.745 "io_size": 3145728, 00:15:05.745 "runtime": 9.131916, 00:15:05.745 "iops": 76.43521907122229, 00:15:05.745 "mibps": 229.30565721366688, 00:15:05.745 "io_failed": 0, 00:15:05.745 "io_timeout": 0, 00:15:05.746 "avg_latency_us": 18371.433310394015, 00:15:05.746 "min_latency_us": 332.6882096069869, 00:15:05.746 "max_latency_us": 111268.10829694323 00:15:05.746 } 00:15:05.746 ], 00:15:05.746 "core_count": 1 00:15:05.746 } 00:15:05.746 10:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.746 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.746 10:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.746 10:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.746 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:05.746 10:38:09 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.746 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:05.746 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:05.746 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:05.746 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:05.746 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:05.746 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:05.746 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:05.746 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:05.746 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:05.746 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:05.746 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:05.746 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:05.746 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:06.005 /dev/nbd0 00:15:06.005 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:06.005 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:06.005 10:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:06.005 10:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:15:06.005 10:38:09 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:06.005 10:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:06.005 10:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:06.005 10:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:15:06.005 10:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:06.005 10:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:06.005 10:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:06.005 1+0 records in 00:15:06.005 1+0 records out 00:15:06.005 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000399327 s, 10.3 MB/s 00:15:06.005 10:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:06.005 10:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:15:06.005 10:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:06.005 10:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:06.005 10:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:15:06.005 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:06.005 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:06.005 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:06.005 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:15:06.005 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 
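The `waitfornbd` trace above (lines tagged `autotest_common.sh@875`–`@893`) is a bounded poll: it greps `/proc/partitions` up to 20 times until the nbd device appears, then sanity-reads it with `dd`. A minimal sketch of that retry pattern follows; the helper name `retry_until` and the marker-file example are illustrative, not SPDK's own:

```shell
#!/usr/bin/env bash
# retry_until CMD...: run CMD up to 20 times, 0.1s apart, until it succeeds.
# Mirrors the bounded loop waitfornbd uses while polling /proc/partitions.
retry_until() {
    local i
    for ((i = 1; i <= 20; i++)); do
        if "$@"; then
            return 0
        fi
        sleep 0.1
    done
    return 1  # condition never became true within the attempt budget
}

# Example: wait for a marker file to exist (stands in for the nbd entry).
marker=$(mktemp)
retry_until test -e "$marker" && echo "device ready"
rm -f "$marker"
```

In the real harness the polled condition is `grep -q -w nbd0 /proc/partitions`, and a follow-up `dd if=/dev/nbd0 ... iflag=direct` confirms the device actually serves reads before the test proceeds.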
00:15:06.006 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:06.006 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:15:06.006 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:15:06.006 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:06.006 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:15:06.006 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:06.006 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:06.265 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:06.265 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:06.265 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:06.265 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:06.265 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:15:06.265 /dev/nbd1 00:15:06.265 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:06.265 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:06.265 10:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:06.265 10:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:15:06.265 10:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:06.265 10:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 
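After exporting the rebuilt `spare` and each surviving base bdev over NBD, the test byte-compares them with `cmp -i 0` (see the `bdev_raid.sh@731` records). A sketch of that integrity check using ordinary files in place of `/dev/nbd0` and `/dev/nbd1`:

```shell
#!/usr/bin/env bash
set -e
# Stand-ins for /dev/nbd0 (rebuilt member) and /dev/nbd1 (reference copy);
# the real run compares the exported block devices instead of temp files.
a=$(mktemp)
b=$(mktemp)
dd if=/dev/urandom of="$a" bs=4096 count=4 status=none
cp "$a" "$b"

# cmp -i 0 compares starting at byte offset 0; the offset is nonzero in
# superblock variants of this test, where metadata precedes the data region.
# cmp exits non-zero at the first differing byte, failing the test early.
if cmp -i 0 "$a" "$b"; then
    echo "mirrors match"
fi
rm -f "$a" "$b"
```

Because this run was created with `superblock: false`, the data offset of every base bdev is 0, which is why the log compares from offset 0.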
00:15:06.265 10:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:06.265 10:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:15:06.265 10:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:06.265 10:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:06.265 10:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:06.265 1+0 records in 00:15:06.265 1+0 records out 00:15:06.265 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000549789 s, 7.5 MB/s 00:15:06.265 10:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:06.265 10:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:15:06.265 10:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:06.265 10:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:06.265 10:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:15:06.265 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:06.265 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:06.265 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:06.525 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:06.525 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:06.525 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd1') 00:15:06.525 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:06.525 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:06.525 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:06.525 10:38:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:06.785 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:06.785 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:06.785 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:06.785 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:06.785 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:06.785 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:06.785 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:06.785 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:06.785 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:06.785 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:15:06.785 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:15:06.785 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:06.785 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:15:06.785 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:06.785 10:38:10 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:06.785 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:06.785 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:06.785 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:06.785 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:06.785 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:15:07.045 /dev/nbd1 00:15:07.045 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:07.045 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:07.045 10:38:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:07.045 10:38:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:15:07.045 10:38:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:07.045 10:38:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:07.045 10:38:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:07.045 10:38:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:15:07.045 10:38:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:07.045 10:38:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:07.045 10:38:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:07.045 1+0 records in 00:15:07.045 1+0 records out 00:15:07.045 4096 bytes (4.1 kB, 4.0 
KiB) copied, 0.000355281 s, 11.5 MB/s 00:15:07.045 10:38:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.045 10:38:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:15:07.045 10:38:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.045 10:38:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:07.045 10:38:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:15:07.045 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:07.045 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:07.045 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:07.045 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:07.045 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:07.045 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:07.045 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:07.045 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:07.045 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:07.045 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:07.303 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:07.303 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:07.303 10:38:10 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:07.303 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:07.303 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:07.303 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:07.303 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:07.303 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:07.303 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:07.303 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:07.303 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:07.303 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:07.303 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:07.303 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:07.303 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:07.562 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:07.562 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:07.562 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:07.562 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:07.562 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:07.562 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:15:07.562 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:07.562 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:07.562 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:07.562 10:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78905 00:15:07.562 10:38:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78905 ']' 00:15:07.562 10:38:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78905 00:15:07.562 10:38:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:15:07.562 10:38:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:07.562 10:38:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78905 00:15:07.562 killing process with pid 78905 00:15:07.562 Received shutdown signal, test time was about 10.979211 seconds 00:15:07.562 00:15:07.562 Latency(us) 00:15:07.562 [2024-11-20T10:38:11.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.562 [2024-11-20T10:38:11.041Z] =================================================================================================================== 00:15:07.562 [2024-11-20T10:38:11.041Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:07.563 10:38:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:07.563 10:38:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:07.563 10:38:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78905' 00:15:07.563 10:38:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78905 00:15:07.563 10:38:10 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@978 -- # wait 78905 00:15:07.563 [2024-11-20 10:38:10.982220] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:08.132 [2024-11-20 10:38:11.408479] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:09.514 ************************************ 00:15:09.514 END TEST raid_rebuild_test_io 00:15:09.514 ************************************ 00:15:09.514 00:15:09.514 real 0m14.454s 00:15:09.514 user 0m18.101s 00:15:09.514 sys 0m1.807s 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.514 10:38:12 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:15:09.514 10:38:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:09.514 10:38:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:09.514 10:38:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:09.514 ************************************ 00:15:09.514 START TEST raid_rebuild_test_sb_io 00:15:09.514 ************************************ 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 
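The `verify_raid_bdev_process` checks that recur through the trace above (`bdev_raid.sh@174`–`@177`) filter `bdev_raid_get_bdevs` output with jq. A sketch against a canned response, trimmed to the fields the checks read (assumes `jq` is installed; the JSON here is a reduced copy of the log's own `raid_bdev1` record):

```shell
#!/usr/bin/env bash
set -e
# Canned bdev_raid_get_bdevs reply, reduced to the fields the checks use.
info='[{"name":"raid_bdev1","state":"online","raid_level":"raid1",
       "num_base_bdevs_discovered":3}]'

# Select the bdev under test, then read the rebuild-process fields with a
# "none" fallback, the same //-default idiom the trace shows.
bdev=$(jq -r '.[] | select(.name == "raid_bdev1")' <<< "$info")
ptype=$(jq -r '.process.type // "none"' <<< "$bdev")
state=$(jq -r '.state' <<< "$bdev")

[[ $ptype == none ]] && [[ $state == online ]] \
    && echo "raid_bdev1: online, no rebuild running"
```

This matches the control flow in the log: the loop breaks (`bdev_raid.sh@709`) once `.process.type` no longer reports `rebuild`, then a final pass asserts the process fields are `none`/`none` and the array is back to `online` with 3 of 4 base bdevs operational.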
00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 
-- # local create_arg 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79333 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79333 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79333 ']' 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:09.514 10:38:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.514 [2024-11-20 10:38:12.747224] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:15:09.514 [2024-11-20 10:38:12.747454] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79333 ] 00:15:09.514 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:09.514 Zero copy mechanism will not be used. 00:15:09.514 [2024-11-20 10:38:12.902106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.774 [2024-11-20 10:38:13.018474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.774 [2024-11-20 10:38:13.214497] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:09.514 [2024-11-20 10:38:13.214530] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:10.344 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:10.344 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:15:10.344 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:10.344 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:10.344 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.344 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.344 BaseBdev1_malloc 00:15:10.344 10:38:13 bdev_raid.raid_rebuild_test_sb_io --
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.344 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:10.344 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.344 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.344 [2024-11-20 10:38:13.622325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:10.344 [2024-11-20 10:38:13.622411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.344 [2024-11-20 10:38:13.622437] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:10.344 [2024-11-20 10:38:13.622448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.344 [2024-11-20 10:38:13.624639] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.344 [2024-11-20 10:38:13.624726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:10.344 BaseBdev1 00:15:10.344 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.344 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:10.344 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:10.344 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.344 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.344 BaseBdev2_malloc 00:15:10.344 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.345 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:15:10.345 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.345 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.345 [2024-11-20 10:38:13.675819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:10.345 [2024-11-20 10:38:13.675890] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.345 [2024-11-20 10:38:13.675913] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:10.345 [2024-11-20 10:38:13.675925] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.345 [2024-11-20 10:38:13.678096] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.345 [2024-11-20 10:38:13.678134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:10.345 BaseBdev2 00:15:10.345 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.345 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:10.345 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:10.345 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.345 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.345 BaseBdev3_malloc 00:15:10.345 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.345 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:10.345 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.345 10:38:13 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.345 [2024-11-20 10:38:13.742970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:10.345 [2024-11-20 10:38:13.743028] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.345 [2024-11-20 10:38:13.743052] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:10.345 [2024-11-20 10:38:13.743064] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.345 [2024-11-20 10:38:13.745434] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.345 [2024-11-20 10:38:13.745473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:10.345 BaseBdev3 00:15:10.345 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.345 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:10.345 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:10.345 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.345 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.345 BaseBdev4_malloc 00:15:10.345 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.345 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:10.345 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.345 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.345 [2024-11-20 10:38:13.798616] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev4_malloc 00:15:10.345 [2024-11-20 10:38:13.798700] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.345 [2024-11-20 10:38:13.798723] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:10.345 [2024-11-20 10:38:13.798733] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.345 [2024-11-20 10:38:13.800917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.345 [2024-11-20 10:38:13.800996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:10.345 BaseBdev4 00:15:10.345 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.345 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:10.345 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.345 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.606 spare_malloc 00:15:10.606 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.606 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:10.606 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.606 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.606 spare_delay 00:15:10.606 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.606 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:10.606 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:10.606 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.606 [2024-11-20 10:38:13.863098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:10.606 [2024-11-20 10:38:13.863204] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.606 [2024-11-20 10:38:13.863230] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:10.606 [2024-11-20 10:38:13.863241] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.606 [2024-11-20 10:38:13.865395] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.606 [2024-11-20 10:38:13.865434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:10.606 spare 00:15:10.606 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.606 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:10.606 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.606 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.606 [2024-11-20 10:38:13.875127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:10.606 [2024-11-20 10:38:13.876939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:10.606 [2024-11-20 10:38:13.877057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:10.606 [2024-11-20 10:38:13.877119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:10.606 [2024-11-20 10:38:13.877294] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007780 00:15:10.606 [2024-11-20 10:38:13.877311] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:10.606 [2024-11-20 10:38:13.877559] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:10.606 [2024-11-20 10:38:13.877743] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:10.606 [2024-11-20 10:38:13.877759] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:10.606 [2024-11-20 10:38:13.877909] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.606 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.606 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:10.606 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.607 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.607 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:10.607 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:10.607 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:10.607 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.607 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.607 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.607 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.607 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:10.607 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.607 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.607 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.607 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.607 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.607 "name": "raid_bdev1", 00:15:10.607 "uuid": "97d7e768-54f9-47da-bf93-8754bc4a0ab2", 00:15:10.607 "strip_size_kb": 0, 00:15:10.607 "state": "online", 00:15:10.607 "raid_level": "raid1", 00:15:10.607 "superblock": true, 00:15:10.607 "num_base_bdevs": 4, 00:15:10.607 "num_base_bdevs_discovered": 4, 00:15:10.607 "num_base_bdevs_operational": 4, 00:15:10.607 "base_bdevs_list": [ 00:15:10.607 { 00:15:10.607 "name": "BaseBdev1", 00:15:10.607 "uuid": "71430b06-1654-5e67-b056-49cb73104571", 00:15:10.607 "is_configured": true, 00:15:10.607 "data_offset": 2048, 00:15:10.607 "data_size": 63488 00:15:10.607 }, 00:15:10.607 { 00:15:10.607 "name": "BaseBdev2", 00:15:10.607 "uuid": "ebee06d2-3dcb-5253-8886-3615b0301690", 00:15:10.607 "is_configured": true, 00:15:10.607 "data_offset": 2048, 00:15:10.607 "data_size": 63488 00:15:10.607 }, 00:15:10.607 { 00:15:10.607 "name": "BaseBdev3", 00:15:10.607 "uuid": "5aaaa3b2-f9db-5c1d-b355-9b91e378baa9", 00:15:10.607 "is_configured": true, 00:15:10.607 "data_offset": 2048, 00:15:10.607 "data_size": 63488 00:15:10.607 }, 00:15:10.607 { 00:15:10.607 "name": "BaseBdev4", 00:15:10.607 "uuid": "c3e27bce-6374-526c-a69d-25ab011beb6f", 00:15:10.607 "is_configured": true, 00:15:10.607 "data_offset": 2048, 00:15:10.607 "data_size": 63488 00:15:10.607 } 00:15:10.607 ] 00:15:10.607 }' 00:15:10.607 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:15:10.607 10:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.177 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:11.177 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:11.177 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.177 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.177 [2024-11-20 10:38:14.362660] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:11.177 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.177 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:11.177 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.177 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.177 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.177 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:11.177 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.177 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:11.177 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:11.177 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:11.177 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:11.177 10:38:14 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.177 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.177 [2024-11-20 10:38:14.462098] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:11.177 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.177 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:11.177 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.177 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.177 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:11.177 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:11.177 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.177 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.177 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.177 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.177 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.177 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.177 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.177 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.177 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:15:11.177 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.177 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.177 "name": "raid_bdev1", 00:15:11.177 "uuid": "97d7e768-54f9-47da-bf93-8754bc4a0ab2", 00:15:11.177 "strip_size_kb": 0, 00:15:11.177 "state": "online", 00:15:11.177 "raid_level": "raid1", 00:15:11.177 "superblock": true, 00:15:11.177 "num_base_bdevs": 4, 00:15:11.177 "num_base_bdevs_discovered": 3, 00:15:11.177 "num_base_bdevs_operational": 3, 00:15:11.177 "base_bdevs_list": [ 00:15:11.177 { 00:15:11.177 "name": null, 00:15:11.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.177 "is_configured": false, 00:15:11.177 "data_offset": 0, 00:15:11.177 "data_size": 63488 00:15:11.177 }, 00:15:11.177 { 00:15:11.177 "name": "BaseBdev2", 00:15:11.177 "uuid": "ebee06d2-3dcb-5253-8886-3615b0301690", 00:15:11.177 "is_configured": true, 00:15:11.177 "data_offset": 2048, 00:15:11.177 "data_size": 63488 00:15:11.177 }, 00:15:11.177 { 00:15:11.177 "name": "BaseBdev3", 00:15:11.177 "uuid": "5aaaa3b2-f9db-5c1d-b355-9b91e378baa9", 00:15:11.177 "is_configured": true, 00:15:11.177 "data_offset": 2048, 00:15:11.177 "data_size": 63488 00:15:11.178 }, 00:15:11.178 { 00:15:11.178 "name": "BaseBdev4", 00:15:11.178 "uuid": "c3e27bce-6374-526c-a69d-25ab011beb6f", 00:15:11.178 "is_configured": true, 00:15:11.178 "data_offset": 2048, 00:15:11.178 "data_size": 63488 00:15:11.178 } 00:15:11.178 ] 00:15:11.178 }' 00:15:11.178 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.178 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.178 [2024-11-20 10:38:14.561922] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:11.178 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:11.178 Zero copy mechanism will not be used. 
00:15:11.178 Running I/O for 60 seconds... 00:15:11.436 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:11.436 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.436 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.436 [2024-11-20 10:38:14.908139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:11.695 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.695 10:38:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:11.695 [2024-11-20 10:38:15.004504] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:11.695 [2024-11-20 10:38:15.006465] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:11.695 [2024-11-20 10:38:15.124034] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:11.695 [2024-11-20 10:38:15.124681] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:11.955 [2024-11-20 10:38:15.248537] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:11.955 [2024-11-20 10:38:15.248852] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:12.213 [2024-11-20 10:38:15.511416] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:12.472 161.00 IOPS, 483.00 MiB/s [2024-11-20T10:38:15.951Z] [2024-11-20 10:38:15.723723] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:12.732 
10:38:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:12.732 10:38:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.732 10:38:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:12.732 10:38:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:12.732 10:38:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.732 10:38:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.732 10:38:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.732 10:38:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.732 10:38:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.732 10:38:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.732 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.732 "name": "raid_bdev1", 00:15:12.732 "uuid": "97d7e768-54f9-47da-bf93-8754bc4a0ab2", 00:15:12.732 "strip_size_kb": 0, 00:15:12.732 "state": "online", 00:15:12.732 "raid_level": "raid1", 00:15:12.732 "superblock": true, 00:15:12.732 "num_base_bdevs": 4, 00:15:12.732 "num_base_bdevs_discovered": 4, 00:15:12.732 "num_base_bdevs_operational": 4, 00:15:12.732 "process": { 00:15:12.732 "type": "rebuild", 00:15:12.732 "target": "spare", 00:15:12.732 "progress": { 00:15:12.732 "blocks": 14336, 00:15:12.732 "percent": 22 00:15:12.732 } 00:15:12.732 }, 00:15:12.732 "base_bdevs_list": [ 00:15:12.732 { 00:15:12.732 "name": "spare", 00:15:12.732 "uuid": "cd6d3991-92a4-5fd8-a272-fb3b298a65a3", 00:15:12.732 "is_configured": true, 00:15:12.732 "data_offset": 
2048, 00:15:12.732 "data_size": 63488 00:15:12.732 }, 00:15:12.732 { 00:15:12.732 "name": "BaseBdev2", 00:15:12.732 "uuid": "ebee06d2-3dcb-5253-8886-3615b0301690", 00:15:12.732 "is_configured": true, 00:15:12.732 "data_offset": 2048, 00:15:12.732 "data_size": 63488 00:15:12.732 }, 00:15:12.732 { 00:15:12.732 "name": "BaseBdev3", 00:15:12.732 "uuid": "5aaaa3b2-f9db-5c1d-b355-9b91e378baa9", 00:15:12.732 "is_configured": true, 00:15:12.732 "data_offset": 2048, 00:15:12.732 "data_size": 63488 00:15:12.732 }, 00:15:12.732 { 00:15:12.732 "name": "BaseBdev4", 00:15:12.732 "uuid": "c3e27bce-6374-526c-a69d-25ab011beb6f", 00:15:12.732 "is_configured": true, 00:15:12.732 "data_offset": 2048, 00:15:12.732 "data_size": 63488 00:15:12.732 } 00:15:12.732 ] 00:15:12.732 }' 00:15:12.732 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.732 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:12.732 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.732 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:12.732 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:12.732 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.732 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.732 [2024-11-20 10:38:16.134456] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:12.992 [2024-11-20 10:38:16.229923] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:12.992 [2024-11-20 10:38:16.234559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.992 [2024-11-20 10:38:16.234610] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:12.992 [2024-11-20 10:38:16.234627] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:12.992 [2024-11-20 10:38:16.253176] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:15:12.992 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.992 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:12.992 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.992 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.992 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:12.992 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:12.992 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:12.992 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.992 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.992 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.992 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.992 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.992 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.992 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.992 10:38:16 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.992 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.992 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.992 "name": "raid_bdev1", 00:15:12.992 "uuid": "97d7e768-54f9-47da-bf93-8754bc4a0ab2", 00:15:12.992 "strip_size_kb": 0, 00:15:12.992 "state": "online", 00:15:12.992 "raid_level": "raid1", 00:15:12.992 "superblock": true, 00:15:12.992 "num_base_bdevs": 4, 00:15:12.992 "num_base_bdevs_discovered": 3, 00:15:12.992 "num_base_bdevs_operational": 3, 00:15:12.992 "base_bdevs_list": [ 00:15:12.992 { 00:15:12.992 "name": null, 00:15:12.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.992 "is_configured": false, 00:15:12.992 "data_offset": 0, 00:15:12.992 "data_size": 63488 00:15:12.992 }, 00:15:12.992 { 00:15:12.992 "name": "BaseBdev2", 00:15:12.992 "uuid": "ebee06d2-3dcb-5253-8886-3615b0301690", 00:15:12.992 "is_configured": true, 00:15:12.992 "data_offset": 2048, 00:15:12.992 "data_size": 63488 00:15:12.992 }, 00:15:12.992 { 00:15:12.992 "name": "BaseBdev3", 00:15:12.992 "uuid": "5aaaa3b2-f9db-5c1d-b355-9b91e378baa9", 00:15:12.992 "is_configured": true, 00:15:12.992 "data_offset": 2048, 00:15:12.992 "data_size": 63488 00:15:12.992 }, 00:15:12.992 { 00:15:12.992 "name": "BaseBdev4", 00:15:12.992 "uuid": "c3e27bce-6374-526c-a69d-25ab011beb6f", 00:15:12.992 "is_configured": true, 00:15:12.992 "data_offset": 2048, 00:15:12.992 "data_size": 63488 00:15:12.992 } 00:15:12.992 ] 00:15:12.992 }' 00:15:12.992 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.992 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.508 153.50 IOPS, 460.50 MiB/s [2024-11-20T10:38:16.987Z] 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 
00:15:13.508 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.508 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:13.508 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:13.508 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.508 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.508 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.508 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.508 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.508 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.508 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.508 "name": "raid_bdev1", 00:15:13.508 "uuid": "97d7e768-54f9-47da-bf93-8754bc4a0ab2", 00:15:13.508 "strip_size_kb": 0, 00:15:13.508 "state": "online", 00:15:13.508 "raid_level": "raid1", 00:15:13.508 "superblock": true, 00:15:13.508 "num_base_bdevs": 4, 00:15:13.508 "num_base_bdevs_discovered": 3, 00:15:13.508 "num_base_bdevs_operational": 3, 00:15:13.508 "base_bdevs_list": [ 00:15:13.508 { 00:15:13.508 "name": null, 00:15:13.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.509 "is_configured": false, 00:15:13.509 "data_offset": 0, 00:15:13.509 "data_size": 63488 00:15:13.509 }, 00:15:13.509 { 00:15:13.509 "name": "BaseBdev2", 00:15:13.509 "uuid": "ebee06d2-3dcb-5253-8886-3615b0301690", 00:15:13.509 "is_configured": true, 00:15:13.509 "data_offset": 2048, 00:15:13.509 "data_size": 63488 00:15:13.509 }, 00:15:13.509 { 00:15:13.509 "name": "BaseBdev3", 
00:15:13.509 "uuid": "5aaaa3b2-f9db-5c1d-b355-9b91e378baa9", 00:15:13.509 "is_configured": true, 00:15:13.509 "data_offset": 2048, 00:15:13.509 "data_size": 63488 00:15:13.509 }, 00:15:13.509 { 00:15:13.509 "name": "BaseBdev4", 00:15:13.509 "uuid": "c3e27bce-6374-526c-a69d-25ab011beb6f", 00:15:13.509 "is_configured": true, 00:15:13.509 "data_offset": 2048, 00:15:13.509 "data_size": 63488 00:15:13.509 } 00:15:13.509 ] 00:15:13.509 }' 00:15:13.509 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.509 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:13.509 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.509 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:13.509 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:13.509 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.509 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.509 [2024-11-20 10:38:16.901697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:13.509 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.509 10:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:13.509 [2024-11-20 10:38:16.981134] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:13.509 [2024-11-20 10:38:16.983087] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:13.768 [2024-11-20 10:38:17.105805] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:13.768 
[2024-11-20 10:38:17.107202] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:14.027 [2024-11-20 10:38:17.337915] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:14.027 [2024-11-20 10:38:17.338257] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:14.546 143.00 IOPS, 429.00 MiB/s [2024-11-20T10:38:18.025Z] [2024-11-20 10:38:17.823271] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:14.546 [2024-11-20 10:38:17.824082] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:14.546 10:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:14.546 10:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.546 10:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:14.546 10:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:14.546 10:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.546 10:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.546 10:38:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.546 10:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.546 10:38:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.546 10:38:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.546 
10:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.546 "name": "raid_bdev1", 00:15:14.546 "uuid": "97d7e768-54f9-47da-bf93-8754bc4a0ab2", 00:15:14.546 "strip_size_kb": 0, 00:15:14.546 "state": "online", 00:15:14.546 "raid_level": "raid1", 00:15:14.546 "superblock": true, 00:15:14.546 "num_base_bdevs": 4, 00:15:14.546 "num_base_bdevs_discovered": 4, 00:15:14.546 "num_base_bdevs_operational": 4, 00:15:14.546 "process": { 00:15:14.546 "type": "rebuild", 00:15:14.546 "target": "spare", 00:15:14.546 "progress": { 00:15:14.546 "blocks": 10240, 00:15:14.546 "percent": 16 00:15:14.546 } 00:15:14.546 }, 00:15:14.546 "base_bdevs_list": [ 00:15:14.546 { 00:15:14.546 "name": "spare", 00:15:14.546 "uuid": "cd6d3991-92a4-5fd8-a272-fb3b298a65a3", 00:15:14.546 "is_configured": true, 00:15:14.546 "data_offset": 2048, 00:15:14.546 "data_size": 63488 00:15:14.546 }, 00:15:14.546 { 00:15:14.546 "name": "BaseBdev2", 00:15:14.546 "uuid": "ebee06d2-3dcb-5253-8886-3615b0301690", 00:15:14.546 "is_configured": true, 00:15:14.546 "data_offset": 2048, 00:15:14.546 "data_size": 63488 00:15:14.546 }, 00:15:14.546 { 00:15:14.546 "name": "BaseBdev3", 00:15:14.546 "uuid": "5aaaa3b2-f9db-5c1d-b355-9b91e378baa9", 00:15:14.546 "is_configured": true, 00:15:14.546 "data_offset": 2048, 00:15:14.546 "data_size": 63488 00:15:14.546 }, 00:15:14.546 { 00:15:14.546 "name": "BaseBdev4", 00:15:14.546 "uuid": "c3e27bce-6374-526c-a69d-25ab011beb6f", 00:15:14.546 "is_configured": true, 00:15:14.546 "data_offset": 2048, 00:15:14.546 "data_size": 63488 00:15:14.546 } 00:15:14.546 ] 00:15:14.546 }' 00:15:14.546 10:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.806 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:14.806 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.806 
10:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:14.806 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:14.806 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:14.806 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:14.806 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:14.806 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:14.806 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:14.806 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:14.806 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.806 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.806 [2024-11-20 10:38:18.086704] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:14.806 [2024-11-20 10:38:18.274308] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:15:14.806 [2024-11-20 10:38:18.274372] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:15:15.065 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.065 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:15.065 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:15.065 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:15.065 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.065 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:15.065 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:15.065 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.065 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.065 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.065 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.065 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.065 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.065 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.065 "name": "raid_bdev1", 00:15:15.065 "uuid": "97d7e768-54f9-47da-bf93-8754bc4a0ab2", 00:15:15.065 "strip_size_kb": 0, 00:15:15.065 "state": "online", 00:15:15.065 "raid_level": "raid1", 00:15:15.065 "superblock": true, 00:15:15.065 "num_base_bdevs": 4, 00:15:15.065 "num_base_bdevs_discovered": 3, 00:15:15.065 "num_base_bdevs_operational": 3, 00:15:15.065 "process": { 00:15:15.065 "type": "rebuild", 00:15:15.065 "target": "spare", 00:15:15.065 "progress": { 00:15:15.065 "blocks": 14336, 00:15:15.065 "percent": 22 00:15:15.065 } 00:15:15.065 }, 00:15:15.065 "base_bdevs_list": [ 00:15:15.066 { 00:15:15.066 "name": "spare", 00:15:15.066 "uuid": "cd6d3991-92a4-5fd8-a272-fb3b298a65a3", 00:15:15.066 "is_configured": true, 00:15:15.066 "data_offset": 2048, 00:15:15.066 "data_size": 63488 00:15:15.066 }, 00:15:15.066 { 00:15:15.066 "name": null, 00:15:15.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.066 
"is_configured": false, 00:15:15.066 "data_offset": 0, 00:15:15.066 "data_size": 63488 00:15:15.066 }, 00:15:15.066 { 00:15:15.066 "name": "BaseBdev3", 00:15:15.066 "uuid": "5aaaa3b2-f9db-5c1d-b355-9b91e378baa9", 00:15:15.066 "is_configured": true, 00:15:15.066 "data_offset": 2048, 00:15:15.066 "data_size": 63488 00:15:15.066 }, 00:15:15.066 { 00:15:15.066 "name": "BaseBdev4", 00:15:15.066 "uuid": "c3e27bce-6374-526c-a69d-25ab011beb6f", 00:15:15.066 "is_configured": true, 00:15:15.066 "data_offset": 2048, 00:15:15.066 "data_size": 63488 00:15:15.066 } 00:15:15.066 ] 00:15:15.066 }' 00:15:15.066 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.066 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:15.066 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.066 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:15.066 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=503 00:15:15.066 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:15.066 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:15.066 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.066 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:15.066 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:15.066 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.066 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.066 
10:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.066 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.066 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.066 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.066 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.066 "name": "raid_bdev1", 00:15:15.066 "uuid": "97d7e768-54f9-47da-bf93-8754bc4a0ab2", 00:15:15.066 "strip_size_kb": 0, 00:15:15.066 "state": "online", 00:15:15.066 "raid_level": "raid1", 00:15:15.066 "superblock": true, 00:15:15.066 "num_base_bdevs": 4, 00:15:15.066 "num_base_bdevs_discovered": 3, 00:15:15.066 "num_base_bdevs_operational": 3, 00:15:15.066 "process": { 00:15:15.066 "type": "rebuild", 00:15:15.066 "target": "spare", 00:15:15.066 "progress": { 00:15:15.066 "blocks": 16384, 00:15:15.066 "percent": 25 00:15:15.066 } 00:15:15.066 }, 00:15:15.066 "base_bdevs_list": [ 00:15:15.066 { 00:15:15.066 "name": "spare", 00:15:15.066 "uuid": "cd6d3991-92a4-5fd8-a272-fb3b298a65a3", 00:15:15.066 "is_configured": true, 00:15:15.066 "data_offset": 2048, 00:15:15.066 "data_size": 63488 00:15:15.066 }, 00:15:15.066 { 00:15:15.066 "name": null, 00:15:15.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.066 "is_configured": false, 00:15:15.066 "data_offset": 0, 00:15:15.066 "data_size": 63488 00:15:15.066 }, 00:15:15.066 { 00:15:15.066 "name": "BaseBdev3", 00:15:15.066 "uuid": "5aaaa3b2-f9db-5c1d-b355-9b91e378baa9", 00:15:15.066 "is_configured": true, 00:15:15.066 "data_offset": 2048, 00:15:15.066 "data_size": 63488 00:15:15.066 }, 00:15:15.066 { 00:15:15.066 "name": "BaseBdev4", 00:15:15.066 "uuid": "c3e27bce-6374-526c-a69d-25ab011beb6f", 00:15:15.066 "is_configured": true, 00:15:15.066 "data_offset": 2048, 00:15:15.066 "data_size": 
63488 00:15:15.066 } 00:15:15.066 ] 00:15:15.066 }' 00:15:15.066 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.066 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:15.066 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.325 123.75 IOPS, 371.25 MiB/s [2024-11-20T10:38:18.804Z] 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:15.325 10:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:15.325 [2024-11-20 10:38:18.651625] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:15.325 [2024-11-20 10:38:18.652731] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:15.583 [2024-11-20 10:38:18.882937] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:15.842 [2024-11-20 10:38:19.315531] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:15.842 [2024-11-20 10:38:19.315910] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:16.102 [2024-11-20 10:38:19.540688] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:15:16.361 110.40 IOPS, 331.20 MiB/s [2024-11-20T10:38:19.840Z] 10:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:16.361 10:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:16.361 10:38:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.361 10:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:16.361 10:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:16.361 10:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.361 10:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.361 10:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.361 10:38:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.361 10:38:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.361 10:38:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.361 10:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.361 "name": "raid_bdev1", 00:15:16.361 "uuid": "97d7e768-54f9-47da-bf93-8754bc4a0ab2", 00:15:16.361 "strip_size_kb": 0, 00:15:16.361 "state": "online", 00:15:16.361 "raid_level": "raid1", 00:15:16.361 "superblock": true, 00:15:16.361 "num_base_bdevs": 4, 00:15:16.361 "num_base_bdevs_discovered": 3, 00:15:16.361 "num_base_bdevs_operational": 3, 00:15:16.361 "process": { 00:15:16.361 "type": "rebuild", 00:15:16.361 "target": "spare", 00:15:16.361 "progress": { 00:15:16.361 "blocks": 32768, 00:15:16.361 "percent": 51 00:15:16.361 } 00:15:16.361 }, 00:15:16.361 "base_bdevs_list": [ 00:15:16.361 { 00:15:16.361 "name": "spare", 00:15:16.361 "uuid": "cd6d3991-92a4-5fd8-a272-fb3b298a65a3", 00:15:16.361 "is_configured": true, 00:15:16.361 "data_offset": 2048, 00:15:16.361 "data_size": 63488 00:15:16.361 }, 00:15:16.361 { 00:15:16.361 "name": null, 00:15:16.361 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:16.361 "is_configured": false, 00:15:16.361 "data_offset": 0, 00:15:16.361 "data_size": 63488 00:15:16.361 }, 00:15:16.361 { 00:15:16.361 "name": "BaseBdev3", 00:15:16.361 "uuid": "5aaaa3b2-f9db-5c1d-b355-9b91e378baa9", 00:15:16.361 "is_configured": true, 00:15:16.361 "data_offset": 2048, 00:15:16.361 "data_size": 63488 00:15:16.361 }, 00:15:16.361 { 00:15:16.361 "name": "BaseBdev4", 00:15:16.361 "uuid": "c3e27bce-6374-526c-a69d-25ab011beb6f", 00:15:16.361 "is_configured": true, 00:15:16.361 "data_offset": 2048, 00:15:16.361 "data_size": 63488 00:15:16.361 } 00:15:16.361 ] 00:15:16.361 }' 00:15:16.361 10:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.361 [2024-11-20 10:38:19.649592] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:16.361 10:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:16.361 10:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.361 10:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:16.361 10:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:16.677 [2024-11-20 10:38:19.889944] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:15:16.677 [2024-11-20 10:38:20.005028] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:15:16.936 [2024-11-20 10:38:20.233343] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:15:16.936 [2024-11-20 10:38:20.234025] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 
43008 offset_end: 49152 00:15:17.196 [2024-11-20 10:38:20.450653] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:15:17.456 98.83 IOPS, 296.50 MiB/s [2024-11-20T10:38:20.935Z] 10:38:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:17.456 10:38:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:17.456 10:38:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.456 10:38:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:17.456 10:38:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:17.456 10:38:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.456 10:38:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.456 10:38:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.456 10:38:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.456 10:38:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.456 10:38:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.456 10:38:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.456 "name": "raid_bdev1", 00:15:17.456 "uuid": "97d7e768-54f9-47da-bf93-8754bc4a0ab2", 00:15:17.456 "strip_size_kb": 0, 00:15:17.456 "state": "online", 00:15:17.456 "raid_level": "raid1", 00:15:17.456 "superblock": true, 00:15:17.456 "num_base_bdevs": 4, 00:15:17.456 "num_base_bdevs_discovered": 3, 00:15:17.456 "num_base_bdevs_operational": 3, 00:15:17.456 "process": { 00:15:17.456 "type": 
"rebuild", 00:15:17.456 "target": "spare", 00:15:17.456 "progress": { 00:15:17.456 "blocks": 49152, 00:15:17.456 "percent": 77 00:15:17.456 } 00:15:17.456 }, 00:15:17.456 "base_bdevs_list": [ 00:15:17.456 { 00:15:17.456 "name": "spare", 00:15:17.456 "uuid": "cd6d3991-92a4-5fd8-a272-fb3b298a65a3", 00:15:17.456 "is_configured": true, 00:15:17.456 "data_offset": 2048, 00:15:17.456 "data_size": 63488 00:15:17.456 }, 00:15:17.456 { 00:15:17.456 "name": null, 00:15:17.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.456 "is_configured": false, 00:15:17.456 "data_offset": 0, 00:15:17.456 "data_size": 63488 00:15:17.456 }, 00:15:17.456 { 00:15:17.456 "name": "BaseBdev3", 00:15:17.456 "uuid": "5aaaa3b2-f9db-5c1d-b355-9b91e378baa9", 00:15:17.456 "is_configured": true, 00:15:17.456 "data_offset": 2048, 00:15:17.456 "data_size": 63488 00:15:17.456 }, 00:15:17.456 { 00:15:17.456 "name": "BaseBdev4", 00:15:17.456 "uuid": "c3e27bce-6374-526c-a69d-25ab011beb6f", 00:15:17.456 "is_configured": true, 00:15:17.456 "data_offset": 2048, 00:15:17.456 "data_size": 63488 00:15:17.456 } 00:15:17.456 ] 00:15:17.456 }' 00:15:17.456 10:38:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.456 10:38:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:17.456 10:38:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.456 10:38:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:17.456 10:38:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:18.024 [2024-11-20 10:38:21.209642] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:15:18.024 [2024-11-20 10:38:21.433342] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:18.284 
[2024-11-20 10:38:21.533161] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:18.284 [2024-11-20 10:38:21.535981] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.544 89.57 IOPS, 268.71 MiB/s [2024-11-20T10:38:22.023Z] 10:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:18.544 10:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:18.544 10:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.544 10:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:18.544 10:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:18.544 10:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.544 10:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.544 10:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.544 10:38:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.544 10:38:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:18.544 10:38:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.544 10:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.544 "name": "raid_bdev1", 00:15:18.544 "uuid": "97d7e768-54f9-47da-bf93-8754bc4a0ab2", 00:15:18.544 "strip_size_kb": 0, 00:15:18.544 "state": "online", 00:15:18.544 "raid_level": "raid1", 00:15:18.544 "superblock": true, 00:15:18.544 "num_base_bdevs": 4, 00:15:18.544 "num_base_bdevs_discovered": 3, 00:15:18.544 "num_base_bdevs_operational": 
3, 00:15:18.544 "base_bdevs_list": [ 00:15:18.544 { 00:15:18.544 "name": "spare", 00:15:18.544 "uuid": "cd6d3991-92a4-5fd8-a272-fb3b298a65a3", 00:15:18.544 "is_configured": true, 00:15:18.544 "data_offset": 2048, 00:15:18.544 "data_size": 63488 00:15:18.544 }, 00:15:18.544 { 00:15:18.544 "name": null, 00:15:18.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.544 "is_configured": false, 00:15:18.544 "data_offset": 0, 00:15:18.544 "data_size": 63488 00:15:18.544 }, 00:15:18.544 { 00:15:18.544 "name": "BaseBdev3", 00:15:18.544 "uuid": "5aaaa3b2-f9db-5c1d-b355-9b91e378baa9", 00:15:18.544 "is_configured": true, 00:15:18.544 "data_offset": 2048, 00:15:18.544 "data_size": 63488 00:15:18.544 }, 00:15:18.544 { 00:15:18.544 "name": "BaseBdev4", 00:15:18.544 "uuid": "c3e27bce-6374-526c-a69d-25ab011beb6f", 00:15:18.544 "is_configured": true, 00:15:18.544 "data_offset": 2048, 00:15:18.544 "data_size": 63488 00:15:18.544 } 00:15:18.544 ] 00:15:18.544 }' 00:15:18.544 10:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.544 10:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:18.544 10:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.544 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:18.544 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:15:18.544 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:18.544 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.544 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:18.544 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:18.544 
10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.804 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.804 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.804 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.804 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:18.804 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.804 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.804 "name": "raid_bdev1", 00:15:18.804 "uuid": "97d7e768-54f9-47da-bf93-8754bc4a0ab2", 00:15:18.804 "strip_size_kb": 0, 00:15:18.804 "state": "online", 00:15:18.804 "raid_level": "raid1", 00:15:18.804 "superblock": true, 00:15:18.804 "num_base_bdevs": 4, 00:15:18.804 "num_base_bdevs_discovered": 3, 00:15:18.804 "num_base_bdevs_operational": 3, 00:15:18.804 "base_bdevs_list": [ 00:15:18.804 { 00:15:18.804 "name": "spare", 00:15:18.804 "uuid": "cd6d3991-92a4-5fd8-a272-fb3b298a65a3", 00:15:18.804 "is_configured": true, 00:15:18.804 "data_offset": 2048, 00:15:18.804 "data_size": 63488 00:15:18.804 }, 00:15:18.804 { 00:15:18.804 "name": null, 00:15:18.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.804 "is_configured": false, 00:15:18.804 "data_offset": 0, 00:15:18.804 "data_size": 63488 00:15:18.804 }, 00:15:18.804 { 00:15:18.804 "name": "BaseBdev3", 00:15:18.804 "uuid": "5aaaa3b2-f9db-5c1d-b355-9b91e378baa9", 00:15:18.804 "is_configured": true, 00:15:18.804 "data_offset": 2048, 00:15:18.804 "data_size": 63488 00:15:18.804 }, 00:15:18.804 { 00:15:18.804 "name": "BaseBdev4", 00:15:18.804 "uuid": "c3e27bce-6374-526c-a69d-25ab011beb6f", 00:15:18.804 "is_configured": true, 00:15:18.804 "data_offset": 
2048, 00:15:18.804 "data_size": 63488 00:15:18.804 } 00:15:18.804 ] 00:15:18.804 }' 00:15:18.804 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.804 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:18.804 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.804 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:18.804 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:18.804 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.804 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:18.804 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:18.804 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:18.804 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:18.804 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.804 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.804 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.804 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.804 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.804 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.804 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:15:18.804 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.804 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.804 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.804 "name": "raid_bdev1", 00:15:18.804 "uuid": "97d7e768-54f9-47da-bf93-8754bc4a0ab2", 00:15:18.804 "strip_size_kb": 0, 00:15:18.804 "state": "online", 00:15:18.804 "raid_level": "raid1", 00:15:18.804 "superblock": true, 00:15:18.804 "num_base_bdevs": 4, 00:15:18.804 "num_base_bdevs_discovered": 3, 00:15:18.804 "num_base_bdevs_operational": 3, 00:15:18.804 "base_bdevs_list": [ 00:15:18.804 { 00:15:18.804 "name": "spare", 00:15:18.804 "uuid": "cd6d3991-92a4-5fd8-a272-fb3b298a65a3", 00:15:18.804 "is_configured": true, 00:15:18.804 "data_offset": 2048, 00:15:18.804 "data_size": 63488 00:15:18.804 }, 00:15:18.804 { 00:15:18.804 "name": null, 00:15:18.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.804 "is_configured": false, 00:15:18.804 "data_offset": 0, 00:15:18.804 "data_size": 63488 00:15:18.804 }, 00:15:18.804 { 00:15:18.804 "name": "BaseBdev3", 00:15:18.804 "uuid": "5aaaa3b2-f9db-5c1d-b355-9b91e378baa9", 00:15:18.804 "is_configured": true, 00:15:18.804 "data_offset": 2048, 00:15:18.804 "data_size": 63488 00:15:18.804 }, 00:15:18.804 { 00:15:18.804 "name": "BaseBdev4", 00:15:18.804 "uuid": "c3e27bce-6374-526c-a69d-25ab011beb6f", 00:15:18.804 "is_configured": true, 00:15:18.804 "data_offset": 2048, 00:15:18.804 "data_size": 63488 00:15:18.804 } 00:15:18.804 ] 00:15:18.804 }' 00:15:18.804 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.804 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.374 82.00 IOPS, 246.00 MiB/s [2024-11-20T10:38:22.853Z] 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # 
rpc_cmd bdev_raid_delete raid_bdev1 00:15:19.374 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.374 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.374 [2024-11-20 10:38:22.648320] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:19.374 [2024-11-20 10:38:22.648433] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:19.374 00:15:19.374 Latency(us) 00:15:19.374 [2024-11-20T10:38:22.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.374 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:19.374 raid_bdev1 : 8.19 80.93 242.80 0.00 0.00 16260.82 332.69 118136.51 00:15:19.374 [2024-11-20T10:38:22.853Z] =================================================================================================================== 00:15:19.374 [2024-11-20T10:38:22.853Z] Total : 80.93 242.80 0.00 0.00 16260.82 332.69 118136.51 00:15:19.374 [2024-11-20 10:38:22.762566] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.374 [2024-11-20 10:38:22.762664] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:19.374 [2024-11-20 10:38:22.762801] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:19.374 [2024-11-20 10:38:22.762850] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:19.374 { 00:15:19.374 "results": [ 00:15:19.374 { 00:15:19.374 "job": "raid_bdev1", 00:15:19.374 "core_mask": "0x1", 00:15:19.374 "workload": "randrw", 00:15:19.374 "percentage": 50, 00:15:19.374 "status": "finished", 00:15:19.374 "queue_depth": 2, 00:15:19.374 "io_size": 3145728, 00:15:19.374 "runtime": 8.191772, 00:15:19.374 "iops": 80.93486976932464, 00:15:19.374 
"mibps": 242.80460930797392, 00:15:19.374 "io_failed": 0, 00:15:19.374 "io_timeout": 0, 00:15:19.374 "avg_latency_us": 16260.823510969722, 00:15:19.374 "min_latency_us": 332.6882096069869, 00:15:19.374 "max_latency_us": 118136.51004366812 00:15:19.374 } 00:15:19.374 ], 00:15:19.374 "core_count": 1 00:15:19.374 } 00:15:19.374 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.374 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.374 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.374 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.374 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:19.374 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.374 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:19.374 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:19.374 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:19.374 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:19.374 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:19.374 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:19.374 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:19.374 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:19.374 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:19.374 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@12 -- # local i 00:15:19.374 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:19.374 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:19.374 10:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:19.634 /dev/nbd0 00:15:19.634 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:19.634 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:19.634 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:19.634 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:19.634 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:19.634 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:19.634 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:19.634 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:19.634 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:19.634 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:19.634 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:19.634 1+0 records in 00:15:19.634 1+0 records out 00:15:19.634 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371847 s, 11.0 MB/s 00:15:19.634 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:15:19.634 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:19.634 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.634 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:19.634 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:19.634 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:19.634 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:19.634 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:19.634 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:15:19.634 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:15:19.634 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:19.634 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:15:19.634 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:15:19.634 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:19.634 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:15:19.634 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:19.634 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:19.634 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:19.634 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:19.634 
10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:19.634 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:19.634 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:15:19.894 /dev/nbd1 00:15:19.894 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:19.894 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:19.894 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:19.894 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:19.894 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:19.894 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:19.894 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:19.894 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:19.894 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:19.894 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:19.894 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:19.894 1+0 records in 00:15:19.894 1+0 records out 00:15:19.894 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00167129 s, 2.5 MB/s 00:15:19.894 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.894 10:38:23 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@890 -- # size=4096 00:15:19.894 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.894 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:19.894 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:19.894 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:19.894 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:19.894 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:20.153 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:20.153 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:20.153 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:20.153 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:20.153 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:20.153 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:20.153 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:20.413 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:20.413 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:20.413 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:20.413 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:20.413 
10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:20.413 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:20.413 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:20.413 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:20.413 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:20.413 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:15:20.413 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:15:20.413 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:20.413 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:15:20.413 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:20.413 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:20.413 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:20.413 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:20.413 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:20.413 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:20.413 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:15:20.672 /dev/nbd1 00:15:20.672 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:20.672 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd1 00:15:20.672 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:20.672 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:20.672 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:20.672 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:20.672 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:20.672 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:20.672 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:20.672 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:20.673 10:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:20.673 1+0 records in 00:15:20.673 1+0 records out 00:15:20.673 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262514 s, 15.6 MB/s 00:15:20.673 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.673 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:20.673 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.673 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:20.673 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:20.673 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:20.673 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:20.673 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:20.673 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:20.673 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:20.673 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:20.673 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:20.673 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:20.673 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:20.673 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:20.932 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:20.932 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:20.932 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:20.932 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:20.932 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:20.932 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:20.932 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:20.932 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:20.932 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:20.932 10:38:24 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:20.932 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:20.932 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:20.932 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:20.932 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:20.932 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:21.192 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:21.192 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:21.192 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:21.192 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:21.192 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:21.192 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:21.192 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:21.192 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:21.192 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:21.192 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:21.192 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.192 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.192 10:38:24 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.192 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:21.192 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.192 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.192 [2024-11-20 10:38:24.575478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:21.192 [2024-11-20 10:38:24.575582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:21.192 [2024-11-20 10:38:24.575635] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:21.192 [2024-11-20 10:38:24.575681] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:21.192 [2024-11-20 10:38:24.577959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:21.192 [2024-11-20 10:38:24.578047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:21.192 [2024-11-20 10:38:24.578170] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:21.192 [2024-11-20 10:38:24.578262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:21.192 [2024-11-20 10:38:24.578464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:21.192 [2024-11-20 10:38:24.578619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:21.192 spare 00:15:21.192 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.192 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:21.192 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:21.192 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.452 [2024-11-20 10:38:24.678576] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:21.452 [2024-11-20 10:38:24.678662] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:21.452 [2024-11-20 10:38:24.678999] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:15:21.452 [2024-11-20 10:38:24.679217] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:21.452 [2024-11-20 10:38:24.679265] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:21.452 [2024-11-20 10:38:24.679520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.452 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.452 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:21.452 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.452 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.452 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:21.452 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:21.452 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:21.452 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.452 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.452 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- 
# local num_base_bdevs_discovered 00:15:21.452 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.452 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.452 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.452 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.452 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.452 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.452 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.452 "name": "raid_bdev1", 00:15:21.452 "uuid": "97d7e768-54f9-47da-bf93-8754bc4a0ab2", 00:15:21.452 "strip_size_kb": 0, 00:15:21.452 "state": "online", 00:15:21.452 "raid_level": "raid1", 00:15:21.452 "superblock": true, 00:15:21.452 "num_base_bdevs": 4, 00:15:21.452 "num_base_bdevs_discovered": 3, 00:15:21.452 "num_base_bdevs_operational": 3, 00:15:21.452 "base_bdevs_list": [ 00:15:21.452 { 00:15:21.452 "name": "spare", 00:15:21.452 "uuid": "cd6d3991-92a4-5fd8-a272-fb3b298a65a3", 00:15:21.452 "is_configured": true, 00:15:21.452 "data_offset": 2048, 00:15:21.452 "data_size": 63488 00:15:21.452 }, 00:15:21.452 { 00:15:21.452 "name": null, 00:15:21.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.452 "is_configured": false, 00:15:21.452 "data_offset": 2048, 00:15:21.452 "data_size": 63488 00:15:21.452 }, 00:15:21.452 { 00:15:21.452 "name": "BaseBdev3", 00:15:21.452 "uuid": "5aaaa3b2-f9db-5c1d-b355-9b91e378baa9", 00:15:21.452 "is_configured": true, 00:15:21.452 "data_offset": 2048, 00:15:21.452 "data_size": 63488 00:15:21.452 }, 00:15:21.452 { 00:15:21.452 "name": "BaseBdev4", 00:15:21.452 "uuid": "c3e27bce-6374-526c-a69d-25ab011beb6f", 00:15:21.452 
"is_configured": true, 00:15:21.452 "data_offset": 2048, 00:15:21.452 "data_size": 63488 00:15:21.452 } 00:15:21.452 ] 00:15:21.452 }' 00:15:21.452 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.452 10:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.721 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:21.721 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.721 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:21.721 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:21.721 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.721 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.721 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.721 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.721 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.721 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.721 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.721 "name": "raid_bdev1", 00:15:21.721 "uuid": "97d7e768-54f9-47da-bf93-8754bc4a0ab2", 00:15:21.721 "strip_size_kb": 0, 00:15:21.721 "state": "online", 00:15:21.721 "raid_level": "raid1", 00:15:21.721 "superblock": true, 00:15:21.721 "num_base_bdevs": 4, 00:15:21.721 "num_base_bdevs_discovered": 3, 00:15:21.721 "num_base_bdevs_operational": 3, 00:15:21.721 "base_bdevs_list": [ 00:15:21.721 { 00:15:21.721 "name": 
"spare", 00:15:21.721 "uuid": "cd6d3991-92a4-5fd8-a272-fb3b298a65a3", 00:15:21.721 "is_configured": true, 00:15:21.721 "data_offset": 2048, 00:15:21.721 "data_size": 63488 00:15:21.721 }, 00:15:21.721 { 00:15:21.721 "name": null, 00:15:21.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.721 "is_configured": false, 00:15:21.721 "data_offset": 2048, 00:15:21.721 "data_size": 63488 00:15:21.721 }, 00:15:21.721 { 00:15:21.721 "name": "BaseBdev3", 00:15:21.721 "uuid": "5aaaa3b2-f9db-5c1d-b355-9b91e378baa9", 00:15:21.721 "is_configured": true, 00:15:21.721 "data_offset": 2048, 00:15:21.721 "data_size": 63488 00:15:21.721 }, 00:15:21.721 { 00:15:21.721 "name": "BaseBdev4", 00:15:21.721 "uuid": "c3e27bce-6374-526c-a69d-25ab011beb6f", 00:15:21.721 "is_configured": true, 00:15:21.721 "data_offset": 2048, 00:15:21.721 "data_size": 63488 00:15:21.721 } 00:15:21.721 ] 00:15:21.721 }' 00:15:21.721 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.007 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:22.007 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.007 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:22.007 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.007 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.007 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.007 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:22.007 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.007 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ 
spare == \s\p\a\r\e ]] 00:15:22.007 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:22.007 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.007 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.007 [2024-11-20 10:38:25.282504] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:22.007 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.007 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:22.007 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:22.007 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.007 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:22.007 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:22.007 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:22.007 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.007 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.007 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.007 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.007 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.007 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.007 10:38:25 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.007 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.007 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.007 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.007 "name": "raid_bdev1", 00:15:22.007 "uuid": "97d7e768-54f9-47da-bf93-8754bc4a0ab2", 00:15:22.007 "strip_size_kb": 0, 00:15:22.007 "state": "online", 00:15:22.007 "raid_level": "raid1", 00:15:22.007 "superblock": true, 00:15:22.007 "num_base_bdevs": 4, 00:15:22.007 "num_base_bdevs_discovered": 2, 00:15:22.007 "num_base_bdevs_operational": 2, 00:15:22.007 "base_bdevs_list": [ 00:15:22.007 { 00:15:22.007 "name": null, 00:15:22.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.007 "is_configured": false, 00:15:22.007 "data_offset": 0, 00:15:22.007 "data_size": 63488 00:15:22.007 }, 00:15:22.007 { 00:15:22.007 "name": null, 00:15:22.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.007 "is_configured": false, 00:15:22.007 "data_offset": 2048, 00:15:22.007 "data_size": 63488 00:15:22.007 }, 00:15:22.007 { 00:15:22.007 "name": "BaseBdev3", 00:15:22.007 "uuid": "5aaaa3b2-f9db-5c1d-b355-9b91e378baa9", 00:15:22.007 "is_configured": true, 00:15:22.007 "data_offset": 2048, 00:15:22.007 "data_size": 63488 00:15:22.007 }, 00:15:22.007 { 00:15:22.007 "name": "BaseBdev4", 00:15:22.007 "uuid": "c3e27bce-6374-526c-a69d-25ab011beb6f", 00:15:22.007 "is_configured": true, 00:15:22.007 "data_offset": 2048, 00:15:22.007 "data_size": 63488 00:15:22.007 } 00:15:22.007 ] 00:15:22.007 }' 00:15:22.007 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.007 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.266 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- 
# rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:22.266 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.266 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.266 [2024-11-20 10:38:25.717854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:22.266 [2024-11-20 10:38:25.718118] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:22.266 [2024-11-20 10:38:25.718183] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:22.266 [2024-11-20 10:38:25.718253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:22.266 [2024-11-20 10:38:25.734876] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:15:22.266 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.266 10:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:22.266 [2024-11-20 10:38:25.736978] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:23.645 10:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:23.645 10:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.645 10:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:23.645 10:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:23.645 10:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.645 10:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.645 
10:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.645 10:38:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.645 10:38:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.645 10:38:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.645 10:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.645 "name": "raid_bdev1", 00:15:23.645 "uuid": "97d7e768-54f9-47da-bf93-8754bc4a0ab2", 00:15:23.645 "strip_size_kb": 0, 00:15:23.645 "state": "online", 00:15:23.645 "raid_level": "raid1", 00:15:23.645 "superblock": true, 00:15:23.645 "num_base_bdevs": 4, 00:15:23.645 "num_base_bdevs_discovered": 3, 00:15:23.645 "num_base_bdevs_operational": 3, 00:15:23.645 "process": { 00:15:23.645 "type": "rebuild", 00:15:23.645 "target": "spare", 00:15:23.645 "progress": { 00:15:23.645 "blocks": 20480, 00:15:23.645 "percent": 32 00:15:23.645 } 00:15:23.645 }, 00:15:23.645 "base_bdevs_list": [ 00:15:23.645 { 00:15:23.645 "name": "spare", 00:15:23.645 "uuid": "cd6d3991-92a4-5fd8-a272-fb3b298a65a3", 00:15:23.645 "is_configured": true, 00:15:23.645 "data_offset": 2048, 00:15:23.645 "data_size": 63488 00:15:23.645 }, 00:15:23.645 { 00:15:23.645 "name": null, 00:15:23.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.645 "is_configured": false, 00:15:23.645 "data_offset": 2048, 00:15:23.645 "data_size": 63488 00:15:23.645 }, 00:15:23.645 { 00:15:23.645 "name": "BaseBdev3", 00:15:23.645 "uuid": "5aaaa3b2-f9db-5c1d-b355-9b91e378baa9", 00:15:23.645 "is_configured": true, 00:15:23.645 "data_offset": 2048, 00:15:23.645 "data_size": 63488 00:15:23.645 }, 00:15:23.645 { 00:15:23.645 "name": "BaseBdev4", 00:15:23.645 "uuid": "c3e27bce-6374-526c-a69d-25ab011beb6f", 00:15:23.645 "is_configured": true, 00:15:23.645 "data_offset": 2048, 00:15:23.645 
"data_size": 63488 00:15:23.645 } 00:15:23.645 ] 00:15:23.645 }' 00:15:23.645 10:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.645 10:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:23.645 10:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.645 10:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:23.645 10:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:23.645 10:38:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.645 10:38:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.645 [2024-11-20 10:38:26.904448] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:23.645 [2024-11-20 10:38:26.942698] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:23.645 [2024-11-20 10:38:26.942815] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.645 [2024-11-20 10:38:26.942840] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:23.645 [2024-11-20 10:38:26.942848] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:23.646 10:38:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.646 10:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:23.646 10:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:23.646 10:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.646 10:38:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:23.646 10:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:23.646 10:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:23.646 10:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.646 10:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.646 10:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.646 10:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.646 10:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.646 10:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.646 10:38:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.646 10:38:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.646 10:38:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.646 10:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.646 "name": "raid_bdev1", 00:15:23.646 "uuid": "97d7e768-54f9-47da-bf93-8754bc4a0ab2", 00:15:23.646 "strip_size_kb": 0, 00:15:23.646 "state": "online", 00:15:23.646 "raid_level": "raid1", 00:15:23.646 "superblock": true, 00:15:23.646 "num_base_bdevs": 4, 00:15:23.646 "num_base_bdevs_discovered": 2, 00:15:23.646 "num_base_bdevs_operational": 2, 00:15:23.646 "base_bdevs_list": [ 00:15:23.646 { 00:15:23.646 "name": null, 00:15:23.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.646 "is_configured": false, 00:15:23.646 "data_offset": 0, 00:15:23.646 "data_size": 
63488 00:15:23.646 }, 00:15:23.646 { 00:15:23.646 "name": null, 00:15:23.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.646 "is_configured": false, 00:15:23.646 "data_offset": 2048, 00:15:23.646 "data_size": 63488 00:15:23.646 }, 00:15:23.646 { 00:15:23.646 "name": "BaseBdev3", 00:15:23.646 "uuid": "5aaaa3b2-f9db-5c1d-b355-9b91e378baa9", 00:15:23.646 "is_configured": true, 00:15:23.646 "data_offset": 2048, 00:15:23.646 "data_size": 63488 00:15:23.646 }, 00:15:23.646 { 00:15:23.646 "name": "BaseBdev4", 00:15:23.646 "uuid": "c3e27bce-6374-526c-a69d-25ab011beb6f", 00:15:23.646 "is_configured": true, 00:15:23.646 "data_offset": 2048, 00:15:23.646 "data_size": 63488 00:15:23.646 } 00:15:23.646 ] 00:15:23.646 }' 00:15:23.646 10:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.646 10:38:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:24.212 10:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:24.212 10:38:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.212 10:38:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:24.212 [2024-11-20 10:38:27.471987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:24.213 [2024-11-20 10:38:27.472114] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:24.213 [2024-11-20 10:38:27.472179] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:24.213 [2024-11-20 10:38:27.472231] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:24.213 [2024-11-20 10:38:27.472733] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:24.213 [2024-11-20 10:38:27.472795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: spare 00:15:24.213 [2024-11-20 10:38:27.472920] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:24.213 [2024-11-20 10:38:27.472959] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:24.213 [2024-11-20 10:38:27.473000] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:24.213 [2024-11-20 10:38:27.473076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:24.213 [2024-11-20 10:38:27.487745] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:15:24.213 spare 00:15:24.213 10:38:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.213 [2024-11-20 10:38:27.489556] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:24.213 10:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:25.150 10:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.150 10:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.150 10:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.150 10:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.150 10:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.150 10:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.150 10:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.150 10:38:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:25.150 10:38:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.150 10:38:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.150 10:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.150 "name": "raid_bdev1", 00:15:25.150 "uuid": "97d7e768-54f9-47da-bf93-8754bc4a0ab2", 00:15:25.150 "strip_size_kb": 0, 00:15:25.150 "state": "online", 00:15:25.150 "raid_level": "raid1", 00:15:25.150 "superblock": true, 00:15:25.150 "num_base_bdevs": 4, 00:15:25.150 "num_base_bdevs_discovered": 3, 00:15:25.150 "num_base_bdevs_operational": 3, 00:15:25.150 "process": { 00:15:25.150 "type": "rebuild", 00:15:25.150 "target": "spare", 00:15:25.150 "progress": { 00:15:25.150 "blocks": 20480, 00:15:25.150 "percent": 32 00:15:25.150 } 00:15:25.150 }, 00:15:25.150 "base_bdevs_list": [ 00:15:25.150 { 00:15:25.150 "name": "spare", 00:15:25.150 "uuid": "cd6d3991-92a4-5fd8-a272-fb3b298a65a3", 00:15:25.150 "is_configured": true, 00:15:25.150 "data_offset": 2048, 00:15:25.150 "data_size": 63488 00:15:25.150 }, 00:15:25.150 { 00:15:25.150 "name": null, 00:15:25.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.150 "is_configured": false, 00:15:25.150 "data_offset": 2048, 00:15:25.150 "data_size": 63488 00:15:25.150 }, 00:15:25.150 { 00:15:25.150 "name": "BaseBdev3", 00:15:25.150 "uuid": "5aaaa3b2-f9db-5c1d-b355-9b91e378baa9", 00:15:25.150 "is_configured": true, 00:15:25.150 "data_offset": 2048, 00:15:25.150 "data_size": 63488 00:15:25.150 }, 00:15:25.150 { 00:15:25.150 "name": "BaseBdev4", 00:15:25.150 "uuid": "c3e27bce-6374-526c-a69d-25ab011beb6f", 00:15:25.150 "is_configured": true, 00:15:25.150 "data_offset": 2048, 00:15:25.150 "data_size": 63488 00:15:25.150 } 00:15:25.150 ] 00:15:25.150 }' 00:15:25.150 10:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.150 10:38:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:25.150 10:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.409 10:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:25.409 10:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:25.409 10:38:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.409 10:38:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.409 [2024-11-20 10:38:28.665431] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:25.410 [2024-11-20 10:38:28.694960] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:25.410 [2024-11-20 10:38:28.695055] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.410 [2024-11-20 10:38:28.695070] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:25.410 [2024-11-20 10:38:28.695079] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:25.410 10:38:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.410 10:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:25.410 10:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:25.410 10:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.410 10:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:25.410 10:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:25.410 10:38:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:25.410 10:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.410 10:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.410 10:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.410 10:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.410 10:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.410 10:38:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.410 10:38:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.410 10:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.410 10:38:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.410 10:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.410 "name": "raid_bdev1", 00:15:25.410 "uuid": "97d7e768-54f9-47da-bf93-8754bc4a0ab2", 00:15:25.410 "strip_size_kb": 0, 00:15:25.410 "state": "online", 00:15:25.410 "raid_level": "raid1", 00:15:25.410 "superblock": true, 00:15:25.410 "num_base_bdevs": 4, 00:15:25.410 "num_base_bdevs_discovered": 2, 00:15:25.410 "num_base_bdevs_operational": 2, 00:15:25.410 "base_bdevs_list": [ 00:15:25.410 { 00:15:25.410 "name": null, 00:15:25.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.410 "is_configured": false, 00:15:25.410 "data_offset": 0, 00:15:25.410 "data_size": 63488 00:15:25.410 }, 00:15:25.410 { 00:15:25.410 "name": null, 00:15:25.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.410 "is_configured": false, 00:15:25.410 "data_offset": 2048, 00:15:25.410 
"data_size": 63488 00:15:25.410 }, 00:15:25.410 { 00:15:25.410 "name": "BaseBdev3", 00:15:25.410 "uuid": "5aaaa3b2-f9db-5c1d-b355-9b91e378baa9", 00:15:25.410 "is_configured": true, 00:15:25.410 "data_offset": 2048, 00:15:25.410 "data_size": 63488 00:15:25.410 }, 00:15:25.410 { 00:15:25.410 "name": "BaseBdev4", 00:15:25.410 "uuid": "c3e27bce-6374-526c-a69d-25ab011beb6f", 00:15:25.410 "is_configured": true, 00:15:25.410 "data_offset": 2048, 00:15:25.410 "data_size": 63488 00:15:25.410 } 00:15:25.410 ] 00:15:25.410 }' 00:15:25.410 10:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.410 10:38:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.977 10:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:25.977 10:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.977 10:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:25.977 10:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:25.977 10:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.977 10:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.977 10:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.977 10:38:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.977 10:38:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.977 10:38:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.977 10:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.977 "name": "raid_bdev1", 
00:15:25.977 "uuid": "97d7e768-54f9-47da-bf93-8754bc4a0ab2", 00:15:25.977 "strip_size_kb": 0, 00:15:25.977 "state": "online", 00:15:25.977 "raid_level": "raid1", 00:15:25.977 "superblock": true, 00:15:25.977 "num_base_bdevs": 4, 00:15:25.977 "num_base_bdevs_discovered": 2, 00:15:25.977 "num_base_bdevs_operational": 2, 00:15:25.977 "base_bdevs_list": [ 00:15:25.977 { 00:15:25.977 "name": null, 00:15:25.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.977 "is_configured": false, 00:15:25.977 "data_offset": 0, 00:15:25.977 "data_size": 63488 00:15:25.977 }, 00:15:25.977 { 00:15:25.977 "name": null, 00:15:25.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.977 "is_configured": false, 00:15:25.977 "data_offset": 2048, 00:15:25.977 "data_size": 63488 00:15:25.977 }, 00:15:25.977 { 00:15:25.977 "name": "BaseBdev3", 00:15:25.977 "uuid": "5aaaa3b2-f9db-5c1d-b355-9b91e378baa9", 00:15:25.977 "is_configured": true, 00:15:25.977 "data_offset": 2048, 00:15:25.977 "data_size": 63488 00:15:25.977 }, 00:15:25.977 { 00:15:25.977 "name": "BaseBdev4", 00:15:25.977 "uuid": "c3e27bce-6374-526c-a69d-25ab011beb6f", 00:15:25.977 "is_configured": true, 00:15:25.977 "data_offset": 2048, 00:15:25.977 "data_size": 63488 00:15:25.977 } 00:15:25.977 ] 00:15:25.977 }' 00:15:25.977 10:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.977 10:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:25.977 10:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.977 10:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:25.977 10:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:25.977 10:38:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.977 10:38:29 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.977 10:38:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.977 10:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:25.977 10:38:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.977 10:38:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.977 [2024-11-20 10:38:29.344206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:25.977 [2024-11-20 10:38:29.344270] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.977 [2024-11-20 10:38:29.344291] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:15:25.977 [2024-11-20 10:38:29.344302] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.977 [2024-11-20 10:38:29.344773] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.977 [2024-11-20 10:38:29.344853] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:25.977 [2024-11-20 10:38:29.344943] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:25.977 [2024-11-20 10:38:29.344960] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:25.977 [2024-11-20 10:38:29.344968] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:25.977 [2024-11-20 10:38:29.344983] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:25.977 BaseBdev1 00:15:25.977 10:38:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:15:25.977 10:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:26.912 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:26.912 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:26.912 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.912 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:26.912 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:26.912 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:26.912 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.912 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.912 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.912 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.912 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.912 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.912 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.912 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.912 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.222 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.222 "name": "raid_bdev1", 00:15:27.222 "uuid": 
"97d7e768-54f9-47da-bf93-8754bc4a0ab2", 00:15:27.222 "strip_size_kb": 0, 00:15:27.222 "state": "online", 00:15:27.222 "raid_level": "raid1", 00:15:27.222 "superblock": true, 00:15:27.222 "num_base_bdevs": 4, 00:15:27.222 "num_base_bdevs_discovered": 2, 00:15:27.222 "num_base_bdevs_operational": 2, 00:15:27.222 "base_bdevs_list": [ 00:15:27.222 { 00:15:27.222 "name": null, 00:15:27.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.222 "is_configured": false, 00:15:27.222 "data_offset": 0, 00:15:27.222 "data_size": 63488 00:15:27.222 }, 00:15:27.222 { 00:15:27.222 "name": null, 00:15:27.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.222 "is_configured": false, 00:15:27.222 "data_offset": 2048, 00:15:27.222 "data_size": 63488 00:15:27.222 }, 00:15:27.222 { 00:15:27.222 "name": "BaseBdev3", 00:15:27.222 "uuid": "5aaaa3b2-f9db-5c1d-b355-9b91e378baa9", 00:15:27.222 "is_configured": true, 00:15:27.222 "data_offset": 2048, 00:15:27.222 "data_size": 63488 00:15:27.222 }, 00:15:27.222 { 00:15:27.222 "name": "BaseBdev4", 00:15:27.222 "uuid": "c3e27bce-6374-526c-a69d-25ab011beb6f", 00:15:27.222 "is_configured": true, 00:15:27.222 "data_offset": 2048, 00:15:27.222 "data_size": 63488 00:15:27.222 } 00:15:27.222 ] 00:15:27.222 }' 00:15:27.222 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.222 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.481 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:27.481 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.481 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:27.481 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:27.481 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.481 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.481 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.481 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.481 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.481 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.481 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.481 "name": "raid_bdev1", 00:15:27.482 "uuid": "97d7e768-54f9-47da-bf93-8754bc4a0ab2", 00:15:27.482 "strip_size_kb": 0, 00:15:27.482 "state": "online", 00:15:27.482 "raid_level": "raid1", 00:15:27.482 "superblock": true, 00:15:27.482 "num_base_bdevs": 4, 00:15:27.482 "num_base_bdevs_discovered": 2, 00:15:27.482 "num_base_bdevs_operational": 2, 00:15:27.482 "base_bdevs_list": [ 00:15:27.482 { 00:15:27.482 "name": null, 00:15:27.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.482 "is_configured": false, 00:15:27.482 "data_offset": 0, 00:15:27.482 "data_size": 63488 00:15:27.482 }, 00:15:27.482 { 00:15:27.482 "name": null, 00:15:27.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.482 "is_configured": false, 00:15:27.482 "data_offset": 2048, 00:15:27.482 "data_size": 63488 00:15:27.482 }, 00:15:27.482 { 00:15:27.482 "name": "BaseBdev3", 00:15:27.482 "uuid": "5aaaa3b2-f9db-5c1d-b355-9b91e378baa9", 00:15:27.482 "is_configured": true, 00:15:27.482 "data_offset": 2048, 00:15:27.482 "data_size": 63488 00:15:27.482 }, 00:15:27.482 { 00:15:27.482 "name": "BaseBdev4", 00:15:27.482 "uuid": "c3e27bce-6374-526c-a69d-25ab011beb6f", 00:15:27.482 "is_configured": true, 00:15:27.482 "data_offset": 2048, 00:15:27.482 "data_size": 63488 00:15:27.482 
} 00:15:27.482 ] 00:15:27.482 }' 00:15:27.482 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.482 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:27.482 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.482 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:27.482 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:27.482 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:15:27.482 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:27.482 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:27.482 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:27.482 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:27.482 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:27.482 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:27.482 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.482 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.482 [2024-11-20 10:38:30.950051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:27.482 [2024-11-20 10:38:30.950230] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than 
existing raid bdev raid_bdev1 (6) 00:15:27.482 [2024-11-20 10:38:30.950243] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:27.482 request: 00:15:27.482 { 00:15:27.482 "base_bdev": "BaseBdev1", 00:15:27.482 "raid_bdev": "raid_bdev1", 00:15:27.482 "method": "bdev_raid_add_base_bdev", 00:15:27.482 "req_id": 1 00:15:27.482 } 00:15:27.482 Got JSON-RPC error response 00:15:27.482 response: 00:15:27.482 { 00:15:27.482 "code": -22, 00:15:27.482 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:27.482 } 00:15:27.482 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:27.482 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:15:27.482 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:27.741 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:27.741 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:27.741 10:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:28.679 10:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:28.679 10:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.679 10:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.679 10:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:28.679 10:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:28.679 10:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:28.679 10:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:15:28.679 10:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.679 10:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.679 10:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.679 10:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.679 10:38:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.679 10:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.679 10:38:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.679 10:38:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.679 10:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.679 "name": "raid_bdev1", 00:15:28.679 "uuid": "97d7e768-54f9-47da-bf93-8754bc4a0ab2", 00:15:28.679 "strip_size_kb": 0, 00:15:28.679 "state": "online", 00:15:28.679 "raid_level": "raid1", 00:15:28.679 "superblock": true, 00:15:28.679 "num_base_bdevs": 4, 00:15:28.679 "num_base_bdevs_discovered": 2, 00:15:28.679 "num_base_bdevs_operational": 2, 00:15:28.679 "base_bdevs_list": [ 00:15:28.679 { 00:15:28.679 "name": null, 00:15:28.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.679 "is_configured": false, 00:15:28.679 "data_offset": 0, 00:15:28.679 "data_size": 63488 00:15:28.679 }, 00:15:28.679 { 00:15:28.679 "name": null, 00:15:28.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.679 "is_configured": false, 00:15:28.679 "data_offset": 2048, 00:15:28.679 "data_size": 63488 00:15:28.679 }, 00:15:28.679 { 00:15:28.679 "name": "BaseBdev3", 00:15:28.679 "uuid": "5aaaa3b2-f9db-5c1d-b355-9b91e378baa9", 00:15:28.679 "is_configured": true, 00:15:28.679 
"data_offset": 2048, 00:15:28.679 "data_size": 63488 00:15:28.679 }, 00:15:28.679 { 00:15:28.679 "name": "BaseBdev4", 00:15:28.679 "uuid": "c3e27bce-6374-526c-a69d-25ab011beb6f", 00:15:28.679 "is_configured": true, 00:15:28.679 "data_offset": 2048, 00:15:28.679 "data_size": 63488 00:15:28.679 } 00:15:28.679 ] 00:15:28.679 }' 00:15:28.679 10:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.679 10:38:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.247 10:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:29.247 10:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.247 10:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:29.247 10:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:29.247 10:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.247 10:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.247 10:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.247 10:38:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.247 10:38:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.247 10:38:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.247 10:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.247 "name": "raid_bdev1", 00:15:29.247 "uuid": "97d7e768-54f9-47da-bf93-8754bc4a0ab2", 00:15:29.247 "strip_size_kb": 0, 00:15:29.247 "state": "online", 00:15:29.247 "raid_level": "raid1", 00:15:29.247 "superblock": true, 
00:15:29.247 "num_base_bdevs": 4, 00:15:29.247 "num_base_bdevs_discovered": 2, 00:15:29.247 "num_base_bdevs_operational": 2, 00:15:29.247 "base_bdevs_list": [ 00:15:29.247 { 00:15:29.247 "name": null, 00:15:29.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.247 "is_configured": false, 00:15:29.247 "data_offset": 0, 00:15:29.247 "data_size": 63488 00:15:29.247 }, 00:15:29.247 { 00:15:29.247 "name": null, 00:15:29.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.247 "is_configured": false, 00:15:29.247 "data_offset": 2048, 00:15:29.247 "data_size": 63488 00:15:29.247 }, 00:15:29.247 { 00:15:29.247 "name": "BaseBdev3", 00:15:29.247 "uuid": "5aaaa3b2-f9db-5c1d-b355-9b91e378baa9", 00:15:29.247 "is_configured": true, 00:15:29.247 "data_offset": 2048, 00:15:29.247 "data_size": 63488 00:15:29.247 }, 00:15:29.247 { 00:15:29.247 "name": "BaseBdev4", 00:15:29.247 "uuid": "c3e27bce-6374-526c-a69d-25ab011beb6f", 00:15:29.247 "is_configured": true, 00:15:29.247 "data_offset": 2048, 00:15:29.247 "data_size": 63488 00:15:29.247 } 00:15:29.247 ] 00:15:29.247 }' 00:15:29.247 10:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.247 10:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:29.247 10:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.247 10:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:29.247 10:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79333 00:15:29.247 10:38:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79333 ']' 00:15:29.247 10:38:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79333 00:15:29.247 10:38:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:15:29.247 10:38:32 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:29.247 10:38:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79333 00:15:29.247 killing process with pid 79333 00:15:29.247 Received shutdown signal, test time was about 18.090364 seconds 00:15:29.247 00:15:29.247 Latency(us) 00:15:29.247 [2024-11-20T10:38:32.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.247 [2024-11-20T10:38:32.726Z] =================================================================================================================== 00:15:29.247 [2024-11-20T10:38:32.726Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:29.247 10:38:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:29.247 10:38:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:29.247 10:38:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79333' 00:15:29.247 10:38:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79333 00:15:29.247 [2024-11-20 10:38:32.619627] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:29.247 [2024-11-20 10:38:32.619761] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:29.247 10:38:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79333 00:15:29.247 [2024-11-20 10:38:32.619834] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:29.247 [2024-11-20 10:38:32.619844] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:29.814 [2024-11-20 10:38:33.039372] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:30.753 10:38:34 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@786 -- # return 0 00:15:30.753 00:15:30.753 real 0m21.549s 00:15:30.753 user 0m28.381s 00:15:30.753 sys 0m2.537s 00:15:30.753 10:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:30.753 10:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:30.753 ************************************ 00:15:30.753 END TEST raid_rebuild_test_sb_io 00:15:30.753 ************************************ 00:15:31.011 10:38:34 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:31.011 10:38:34 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:15:31.011 10:38:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:31.011 10:38:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:31.011 10:38:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:31.011 ************************************ 00:15:31.011 START TEST raid5f_state_function_test 00:15:31.011 ************************************ 00:15:31.011 10:38:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:15:31.011 10:38:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:31.011 10:38:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:31.011 10:38:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:31.011 10:38:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:31.011 10:38:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:31.011 10:38:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:31.011 10:38:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo 
BaseBdev1 00:15:31.011 10:38:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:31.011 10:38:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:31.011 10:38:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:31.011 10:38:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:31.011 10:38:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:31.011 10:38:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:31.011 10:38:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:31.011 10:38:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:31.011 10:38:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:31.011 10:38:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:31.011 10:38:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:31.011 10:38:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:31.011 10:38:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:31.011 10:38:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:31.011 10:38:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:31.011 10:38:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:31.011 10:38:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:31.011 10:38:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' 
false = true ']' 00:15:31.011 10:38:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:31.011 10:38:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80056 00:15:31.011 10:38:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:31.011 10:38:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80056' 00:15:31.011 Process raid pid: 80056 00:15:31.011 10:38:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80056 00:15:31.011 10:38:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80056 ']' 00:15:31.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.011 10:38:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.011 10:38:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:31.011 10:38:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.011 10:38:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:31.011 10:38:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.011 [2024-11-20 10:38:34.371171] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
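The `waitforlisten 80056` call above blocks until the freshly launched `bdev_svc` process is up and answering on `/var/tmp/spdk.sock`, giving up after a bounded number of retries (`max_retries=100` in the trace). A minimal standalone sketch of that bounded-polling pattern follows; the helper name `wait_for_cond` and the counter-based condition are illustrative stand-ins, not the SPDK original, which instead polls the target's RPC socket:

```shell
# Sketch of a waitforlisten-style bounded retry loop (illustrative;
# SPDK's real helper checks that the RPC socket is answering).
wait_for_cond() {
    local cond=$1 max_retries=${2:-100} i=0
    until eval "$cond"; do
        if (( ++i >= max_retries )); then
            return 1   # retry budget exhausted; caller decides how to fail
        fi
        sleep 0.1
    done
    return 0
}

count=0
if wait_for_cond '(( ++count >= 3 ))' 10; then
    echo "condition met after $count attempts"
fi
```

The same shape underlies the `waitforbdev` calls later in the trace: poll a cheap check, sleep briefly between attempts, and convert a timeout into a nonzero exit status so the surrounding test fails cleanly.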
00:15:31.011 [2024-11-20 10:38:34.371292] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:31.269 [2024-11-20 10:38:34.543336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.269 [2024-11-20 10:38:34.669481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.529 [2024-11-20 10:38:34.890138] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.529 [2024-11-20 10:38:34.890256] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.789 10:38:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:31.789 10:38:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:31.789 10:38:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:31.789 10:38:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.789 10:38:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.789 [2024-11-20 10:38:35.239820] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:31.789 [2024-11-20 10:38:35.239936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:31.789 [2024-11-20 10:38:35.239958] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:31.789 [2024-11-20 10:38:35.239969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:31.789 [2024-11-20 10:38:35.239975] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:31.789 [2024-11-20 10:38:35.239984] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:31.789 10:38:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.789 10:38:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:31.789 10:38:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:31.789 10:38:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:31.789 10:38:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:31.789 10:38:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.789 10:38:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:31.789 10:38:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.789 10:38:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.789 10:38:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.789 10:38:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.789 10:38:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.790 10:38:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.790 10:38:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.790 10:38:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.049 10:38:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:15:32.049 10:38:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.049 "name": "Existed_Raid", 00:15:32.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.049 "strip_size_kb": 64, 00:15:32.049 "state": "configuring", 00:15:32.049 "raid_level": "raid5f", 00:15:32.049 "superblock": false, 00:15:32.049 "num_base_bdevs": 3, 00:15:32.049 "num_base_bdevs_discovered": 0, 00:15:32.049 "num_base_bdevs_operational": 3, 00:15:32.049 "base_bdevs_list": [ 00:15:32.049 { 00:15:32.049 "name": "BaseBdev1", 00:15:32.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.049 "is_configured": false, 00:15:32.049 "data_offset": 0, 00:15:32.049 "data_size": 0 00:15:32.049 }, 00:15:32.049 { 00:15:32.049 "name": "BaseBdev2", 00:15:32.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.049 "is_configured": false, 00:15:32.049 "data_offset": 0, 00:15:32.049 "data_size": 0 00:15:32.049 }, 00:15:32.049 { 00:15:32.049 "name": "BaseBdev3", 00:15:32.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.049 "is_configured": false, 00:15:32.049 "data_offset": 0, 00:15:32.049 "data_size": 0 00:15:32.049 } 00:15:32.049 ] 00:15:32.049 }' 00:15:32.049 10:38:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.049 10:38:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.309 10:38:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:32.309 10:38:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.309 10:38:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.309 [2024-11-20 10:38:35.726944] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:32.309 [2024-11-20 10:38:35.726982] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:15:32.309 10:38:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.309 10:38:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:32.309 10:38:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.309 10:38:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.309 [2024-11-20 10:38:35.738906] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:32.309 [2024-11-20 10:38:35.738992] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:32.309 [2024-11-20 10:38:35.739027] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:32.309 [2024-11-20 10:38:35.739066] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:32.309 [2024-11-20 10:38:35.739090] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:32.309 [2024-11-20 10:38:35.739118] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:32.309 10:38:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.309 10:38:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:32.309 10:38:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.309 10:38:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.569 [2024-11-20 10:38:35.785518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:32.569 BaseBdev1 00:15:32.569 10:38:35 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.569 10:38:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:32.569 10:38:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:32.569 10:38:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:32.569 10:38:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:32.569 10:38:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:32.569 10:38:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:32.569 10:38:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:32.569 10:38:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.569 10:38:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.569 10:38:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.569 10:38:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:32.569 10:38:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.569 10:38:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.569 [ 00:15:32.569 { 00:15:32.569 "name": "BaseBdev1", 00:15:32.569 "aliases": [ 00:15:32.569 "8302e475-761b-47c8-a30e-0f4248682f13" 00:15:32.569 ], 00:15:32.569 "product_name": "Malloc disk", 00:15:32.569 "block_size": 512, 00:15:32.569 "num_blocks": 65536, 00:15:32.569 "uuid": "8302e475-761b-47c8-a30e-0f4248682f13", 00:15:32.569 "assigned_rate_limits": { 00:15:32.569 "rw_ios_per_sec": 0, 00:15:32.569 
"rw_mbytes_per_sec": 0, 00:15:32.569 "r_mbytes_per_sec": 0, 00:15:32.569 "w_mbytes_per_sec": 0 00:15:32.569 }, 00:15:32.569 "claimed": true, 00:15:32.569 "claim_type": "exclusive_write", 00:15:32.569 "zoned": false, 00:15:32.569 "supported_io_types": { 00:15:32.569 "read": true, 00:15:32.569 "write": true, 00:15:32.569 "unmap": true, 00:15:32.569 "flush": true, 00:15:32.569 "reset": true, 00:15:32.569 "nvme_admin": false, 00:15:32.569 "nvme_io": false, 00:15:32.569 "nvme_io_md": false, 00:15:32.569 "write_zeroes": true, 00:15:32.569 "zcopy": true, 00:15:32.569 "get_zone_info": false, 00:15:32.569 "zone_management": false, 00:15:32.569 "zone_append": false, 00:15:32.569 "compare": false, 00:15:32.569 "compare_and_write": false, 00:15:32.569 "abort": true, 00:15:32.569 "seek_hole": false, 00:15:32.569 "seek_data": false, 00:15:32.569 "copy": true, 00:15:32.569 "nvme_iov_md": false 00:15:32.569 }, 00:15:32.569 "memory_domains": [ 00:15:32.569 { 00:15:32.569 "dma_device_id": "system", 00:15:32.569 "dma_device_type": 1 00:15:32.569 }, 00:15:32.569 { 00:15:32.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.569 "dma_device_type": 2 00:15:32.569 } 00:15:32.569 ], 00:15:32.569 "driver_specific": {} 00:15:32.569 } 00:15:32.569 ] 00:15:32.569 10:38:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.569 10:38:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:32.569 10:38:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:32.569 10:38:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:32.569 10:38:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:32.569 10:38:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:32.569 10:38:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.569 10:38:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:32.570 10:38:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.570 10:38:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.570 10:38:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.570 10:38:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.570 10:38:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.570 10:38:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.570 10:38:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.570 10:38:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.570 10:38:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.570 10:38:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.570 "name": "Existed_Raid", 00:15:32.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.570 "strip_size_kb": 64, 00:15:32.570 "state": "configuring", 00:15:32.570 "raid_level": "raid5f", 00:15:32.570 "superblock": false, 00:15:32.570 "num_base_bdevs": 3, 00:15:32.570 "num_base_bdevs_discovered": 1, 00:15:32.570 "num_base_bdevs_operational": 3, 00:15:32.570 "base_bdevs_list": [ 00:15:32.570 { 00:15:32.570 "name": "BaseBdev1", 00:15:32.570 "uuid": "8302e475-761b-47c8-a30e-0f4248682f13", 00:15:32.570 "is_configured": true, 00:15:32.570 "data_offset": 0, 00:15:32.570 "data_size": 65536 00:15:32.570 }, 00:15:32.570 { 00:15:32.570 "name": 
"BaseBdev2", 00:15:32.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.570 "is_configured": false, 00:15:32.570 "data_offset": 0, 00:15:32.570 "data_size": 0 00:15:32.570 }, 00:15:32.570 { 00:15:32.570 "name": "BaseBdev3", 00:15:32.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.570 "is_configured": false, 00:15:32.570 "data_offset": 0, 00:15:32.570 "data_size": 0 00:15:32.570 } 00:15:32.570 ] 00:15:32.570 }' 00:15:32.570 10:38:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.570 10:38:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.829 10:38:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:32.829 10:38:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.829 10:38:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.829 [2024-11-20 10:38:36.272731] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:32.829 [2024-11-20 10:38:36.272786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:32.829 10:38:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.829 10:38:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:32.829 10:38:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.829 10:38:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.829 [2024-11-20 10:38:36.284755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:32.829 [2024-11-20 10:38:36.286551] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:15:32.829 [2024-11-20 10:38:36.286642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:32.829 [2024-11-20 10:38:36.286657] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:32.829 [2024-11-20 10:38:36.286666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:32.829 10:38:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.829 10:38:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:32.829 10:38:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:32.829 10:38:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:32.829 10:38:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:32.829 10:38:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:32.829 10:38:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:32.829 10:38:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.829 10:38:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:32.829 10:38:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.829 10:38:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.829 10:38:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.829 10:38:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.829 10:38:36 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.829 10:38:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.829 10:38:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.829 10:38:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.088 10:38:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.088 10:38:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.088 "name": "Existed_Raid", 00:15:33.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.088 "strip_size_kb": 64, 00:15:33.088 "state": "configuring", 00:15:33.088 "raid_level": "raid5f", 00:15:33.088 "superblock": false, 00:15:33.088 "num_base_bdevs": 3, 00:15:33.088 "num_base_bdevs_discovered": 1, 00:15:33.088 "num_base_bdevs_operational": 3, 00:15:33.088 "base_bdevs_list": [ 00:15:33.088 { 00:15:33.088 "name": "BaseBdev1", 00:15:33.088 "uuid": "8302e475-761b-47c8-a30e-0f4248682f13", 00:15:33.088 "is_configured": true, 00:15:33.088 "data_offset": 0, 00:15:33.088 "data_size": 65536 00:15:33.088 }, 00:15:33.088 { 00:15:33.088 "name": "BaseBdev2", 00:15:33.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.088 "is_configured": false, 00:15:33.088 "data_offset": 0, 00:15:33.088 "data_size": 0 00:15:33.088 }, 00:15:33.088 { 00:15:33.088 "name": "BaseBdev3", 00:15:33.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.088 "is_configured": false, 00:15:33.088 "data_offset": 0, 00:15:33.088 "data_size": 0 00:15:33.088 } 00:15:33.088 ] 00:15:33.088 }' 00:15:33.088 10:38:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.088 10:38:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.347 10:38:36 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:33.347 10:38:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.347 10:38:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.347 [2024-11-20 10:38:36.754709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:33.347 BaseBdev2 00:15:33.347 10:38:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.347 10:38:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:33.347 10:38:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:33.347 10:38:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:33.347 10:38:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:33.347 10:38:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:33.347 10:38:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:33.347 10:38:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:33.347 10:38:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.347 10:38:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.347 10:38:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.347 10:38:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:33.347 10:38:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.347 10:38:36 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:33.347 [ 00:15:33.347 { 00:15:33.347 "name": "BaseBdev2", 00:15:33.347 "aliases": [ 00:15:33.347 "9b41f0fd-3588-4876-80d2-4517f0e0a51d" 00:15:33.347 ], 00:15:33.347 "product_name": "Malloc disk", 00:15:33.347 "block_size": 512, 00:15:33.347 "num_blocks": 65536, 00:15:33.347 "uuid": "9b41f0fd-3588-4876-80d2-4517f0e0a51d", 00:15:33.348 "assigned_rate_limits": { 00:15:33.348 "rw_ios_per_sec": 0, 00:15:33.348 "rw_mbytes_per_sec": 0, 00:15:33.348 "r_mbytes_per_sec": 0, 00:15:33.348 "w_mbytes_per_sec": 0 00:15:33.348 }, 00:15:33.348 "claimed": true, 00:15:33.348 "claim_type": "exclusive_write", 00:15:33.348 "zoned": false, 00:15:33.348 "supported_io_types": { 00:15:33.348 "read": true, 00:15:33.348 "write": true, 00:15:33.348 "unmap": true, 00:15:33.348 "flush": true, 00:15:33.348 "reset": true, 00:15:33.348 "nvme_admin": false, 00:15:33.348 "nvme_io": false, 00:15:33.348 "nvme_io_md": false, 00:15:33.348 "write_zeroes": true, 00:15:33.348 "zcopy": true, 00:15:33.348 "get_zone_info": false, 00:15:33.348 "zone_management": false, 00:15:33.348 "zone_append": false, 00:15:33.348 "compare": false, 00:15:33.348 "compare_and_write": false, 00:15:33.348 "abort": true, 00:15:33.348 "seek_hole": false, 00:15:33.348 "seek_data": false, 00:15:33.348 "copy": true, 00:15:33.348 "nvme_iov_md": false 00:15:33.348 }, 00:15:33.348 "memory_domains": [ 00:15:33.348 { 00:15:33.348 "dma_device_id": "system", 00:15:33.348 "dma_device_type": 1 00:15:33.348 }, 00:15:33.348 { 00:15:33.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.348 "dma_device_type": 2 00:15:33.348 } 00:15:33.348 ], 00:15:33.348 "driver_specific": {} 00:15:33.348 } 00:15:33.348 ] 00:15:33.348 10:38:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.348 10:38:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:33.348 10:38:36 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:33.348 10:38:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:33.348 10:38:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:33.348 10:38:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:33.348 10:38:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:33.348 10:38:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.348 10:38:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.348 10:38:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:33.348 10:38:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.348 10:38:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.348 10:38:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.348 10:38:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.348 10:38:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.348 10:38:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.348 10:38:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.348 10:38:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.348 10:38:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.609 10:38:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:15:33.609 "name": "Existed_Raid", 00:15:33.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.609 "strip_size_kb": 64, 00:15:33.609 "state": "configuring", 00:15:33.609 "raid_level": "raid5f", 00:15:33.609 "superblock": false, 00:15:33.609 "num_base_bdevs": 3, 00:15:33.609 "num_base_bdevs_discovered": 2, 00:15:33.609 "num_base_bdevs_operational": 3, 00:15:33.609 "base_bdevs_list": [ 00:15:33.609 { 00:15:33.609 "name": "BaseBdev1", 00:15:33.609 "uuid": "8302e475-761b-47c8-a30e-0f4248682f13", 00:15:33.609 "is_configured": true, 00:15:33.609 "data_offset": 0, 00:15:33.609 "data_size": 65536 00:15:33.609 }, 00:15:33.609 { 00:15:33.609 "name": "BaseBdev2", 00:15:33.609 "uuid": "9b41f0fd-3588-4876-80d2-4517f0e0a51d", 00:15:33.609 "is_configured": true, 00:15:33.609 "data_offset": 0, 00:15:33.609 "data_size": 65536 00:15:33.609 }, 00:15:33.609 { 00:15:33.609 "name": "BaseBdev3", 00:15:33.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.609 "is_configured": false, 00:15:33.609 "data_offset": 0, 00:15:33.609 "data_size": 0 00:15:33.609 } 00:15:33.609 ] 00:15:33.609 }' 00:15:33.609 10:38:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.609 10:38:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.867 10:38:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:33.867 10:38:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.867 10:38:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.867 [2024-11-20 10:38:37.320977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:33.867 [2024-11-20 10:38:37.321049] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:33.867 [2024-11-20 10:38:37.321063] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:33.867 [2024-11-20 10:38:37.321321] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:33.867 [2024-11-20 10:38:37.326547] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:33.867 [2024-11-20 10:38:37.326572] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:33.867 [2024-11-20 10:38:37.326836] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.867 BaseBdev3 00:15:33.867 10:38:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.867 10:38:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:33.867 10:38:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:33.867 10:38:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:33.867 10:38:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:33.867 10:38:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:33.867 10:38:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:33.867 10:38:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:33.867 10:38:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.867 10:38:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.867 10:38:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.867 10:38:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:15:33.867 10:38:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.867 10:38:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.127 [ 00:15:34.127 { 00:15:34.127 "name": "BaseBdev3", 00:15:34.127 "aliases": [ 00:15:34.127 "2b731efa-7751-479b-b683-c55a0ebe4409" 00:15:34.127 ], 00:15:34.127 "product_name": "Malloc disk", 00:15:34.127 "block_size": 512, 00:15:34.127 "num_blocks": 65536, 00:15:34.127 "uuid": "2b731efa-7751-479b-b683-c55a0ebe4409", 00:15:34.127 "assigned_rate_limits": { 00:15:34.127 "rw_ios_per_sec": 0, 00:15:34.127 "rw_mbytes_per_sec": 0, 00:15:34.127 "r_mbytes_per_sec": 0, 00:15:34.127 "w_mbytes_per_sec": 0 00:15:34.127 }, 00:15:34.127 "claimed": true, 00:15:34.127 "claim_type": "exclusive_write", 00:15:34.127 "zoned": false, 00:15:34.127 "supported_io_types": { 00:15:34.127 "read": true, 00:15:34.127 "write": true, 00:15:34.127 "unmap": true, 00:15:34.127 "flush": true, 00:15:34.127 "reset": true, 00:15:34.127 "nvme_admin": false, 00:15:34.127 "nvme_io": false, 00:15:34.127 "nvme_io_md": false, 00:15:34.127 "write_zeroes": true, 00:15:34.127 "zcopy": true, 00:15:34.127 "get_zone_info": false, 00:15:34.127 "zone_management": false, 00:15:34.127 "zone_append": false, 00:15:34.127 "compare": false, 00:15:34.127 "compare_and_write": false, 00:15:34.127 "abort": true, 00:15:34.127 "seek_hole": false, 00:15:34.127 "seek_data": false, 00:15:34.127 "copy": true, 00:15:34.127 "nvme_iov_md": false 00:15:34.127 }, 00:15:34.127 "memory_domains": [ 00:15:34.127 { 00:15:34.127 "dma_device_id": "system", 00:15:34.127 "dma_device_type": 1 00:15:34.127 }, 00:15:34.127 { 00:15:34.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.127 "dma_device_type": 2 00:15:34.127 } 00:15:34.127 ], 00:15:34.127 "driver_specific": {} 00:15:34.127 } 00:15:34.127 ] 00:15:34.127 10:38:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:15:34.127 10:38:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:34.127 10:38:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:34.127 10:38:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:34.127 10:38:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:34.127 10:38:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:34.127 10:38:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.127 10:38:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.127 10:38:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.127 10:38:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:34.127 10:38:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.127 10:38:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.127 10:38:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.127 10:38:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.127 10:38:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.127 10:38:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.127 10:38:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.127 10:38:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.127 10:38:37 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.127 10:38:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.127 "name": "Existed_Raid", 00:15:34.127 "uuid": "27caad26-d52b-4126-8c61-dd506c2b3539", 00:15:34.127 "strip_size_kb": 64, 00:15:34.127 "state": "online", 00:15:34.127 "raid_level": "raid5f", 00:15:34.127 "superblock": false, 00:15:34.127 "num_base_bdevs": 3, 00:15:34.127 "num_base_bdevs_discovered": 3, 00:15:34.127 "num_base_bdevs_operational": 3, 00:15:34.127 "base_bdevs_list": [ 00:15:34.127 { 00:15:34.127 "name": "BaseBdev1", 00:15:34.127 "uuid": "8302e475-761b-47c8-a30e-0f4248682f13", 00:15:34.127 "is_configured": true, 00:15:34.127 "data_offset": 0, 00:15:34.127 "data_size": 65536 00:15:34.127 }, 00:15:34.127 { 00:15:34.127 "name": "BaseBdev2", 00:15:34.127 "uuid": "9b41f0fd-3588-4876-80d2-4517f0e0a51d", 00:15:34.127 "is_configured": true, 00:15:34.127 "data_offset": 0, 00:15:34.127 "data_size": 65536 00:15:34.127 }, 00:15:34.127 { 00:15:34.127 "name": "BaseBdev3", 00:15:34.127 "uuid": "2b731efa-7751-479b-b683-c55a0ebe4409", 00:15:34.127 "is_configured": true, 00:15:34.127 "data_offset": 0, 00:15:34.127 "data_size": 65536 00:15:34.127 } 00:15:34.127 ] 00:15:34.127 }' 00:15:34.127 10:38:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.127 10:38:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.386 10:38:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:34.386 10:38:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:34.386 10:38:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:34.386 10:38:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:34.386 10:38:37 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:34.386 10:38:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:34.386 10:38:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:34.386 10:38:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:34.386 10:38:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.386 10:38:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.386 [2024-11-20 10:38:37.784736] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:34.386 10:38:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.386 10:38:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:34.386 "name": "Existed_Raid", 00:15:34.386 "aliases": [ 00:15:34.386 "27caad26-d52b-4126-8c61-dd506c2b3539" 00:15:34.386 ], 00:15:34.386 "product_name": "Raid Volume", 00:15:34.386 "block_size": 512, 00:15:34.386 "num_blocks": 131072, 00:15:34.386 "uuid": "27caad26-d52b-4126-8c61-dd506c2b3539", 00:15:34.386 "assigned_rate_limits": { 00:15:34.386 "rw_ios_per_sec": 0, 00:15:34.386 "rw_mbytes_per_sec": 0, 00:15:34.386 "r_mbytes_per_sec": 0, 00:15:34.386 "w_mbytes_per_sec": 0 00:15:34.386 }, 00:15:34.386 "claimed": false, 00:15:34.386 "zoned": false, 00:15:34.386 "supported_io_types": { 00:15:34.386 "read": true, 00:15:34.386 "write": true, 00:15:34.386 "unmap": false, 00:15:34.386 "flush": false, 00:15:34.386 "reset": true, 00:15:34.386 "nvme_admin": false, 00:15:34.386 "nvme_io": false, 00:15:34.386 "nvme_io_md": false, 00:15:34.386 "write_zeroes": true, 00:15:34.386 "zcopy": false, 00:15:34.386 "get_zone_info": false, 00:15:34.386 "zone_management": false, 00:15:34.386 "zone_append": false, 
00:15:34.386 "compare": false, 00:15:34.386 "compare_and_write": false, 00:15:34.386 "abort": false, 00:15:34.386 "seek_hole": false, 00:15:34.386 "seek_data": false, 00:15:34.386 "copy": false, 00:15:34.386 "nvme_iov_md": false 00:15:34.386 }, 00:15:34.386 "driver_specific": { 00:15:34.386 "raid": { 00:15:34.386 "uuid": "27caad26-d52b-4126-8c61-dd506c2b3539", 00:15:34.386 "strip_size_kb": 64, 00:15:34.386 "state": "online", 00:15:34.386 "raid_level": "raid5f", 00:15:34.386 "superblock": false, 00:15:34.386 "num_base_bdevs": 3, 00:15:34.386 "num_base_bdevs_discovered": 3, 00:15:34.386 "num_base_bdevs_operational": 3, 00:15:34.386 "base_bdevs_list": [ 00:15:34.386 { 00:15:34.386 "name": "BaseBdev1", 00:15:34.386 "uuid": "8302e475-761b-47c8-a30e-0f4248682f13", 00:15:34.386 "is_configured": true, 00:15:34.386 "data_offset": 0, 00:15:34.386 "data_size": 65536 00:15:34.386 }, 00:15:34.386 { 00:15:34.386 "name": "BaseBdev2", 00:15:34.386 "uuid": "9b41f0fd-3588-4876-80d2-4517f0e0a51d", 00:15:34.386 "is_configured": true, 00:15:34.386 "data_offset": 0, 00:15:34.386 "data_size": 65536 00:15:34.386 }, 00:15:34.386 { 00:15:34.386 "name": "BaseBdev3", 00:15:34.386 "uuid": "2b731efa-7751-479b-b683-c55a0ebe4409", 00:15:34.386 "is_configured": true, 00:15:34.386 "data_offset": 0, 00:15:34.386 "data_size": 65536 00:15:34.386 } 00:15:34.386 ] 00:15:34.386 } 00:15:34.386 } 00:15:34.386 }' 00:15:34.386 10:38:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:34.645 10:38:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:34.645 BaseBdev2 00:15:34.645 BaseBdev3' 00:15:34.645 10:38:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.645 10:38:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:15:34.645 10:38:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.645 10:38:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:34.645 10:38:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.645 10:38:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.645 10:38:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.645 10:38:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.645 10:38:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:34.645 10:38:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:34.645 10:38:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.645 10:38:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:34.645 10:38:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.645 10:38:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.645 10:38:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.645 10:38:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.645 10:38:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:34.645 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:34.645 10:38:38 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.645 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.645 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:34.645 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.645 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.645 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.645 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:34.645 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:34.645 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:34.645 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.645 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.645 [2024-11-20 10:38:38.040137] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:34.904 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.904 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:34.904 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:34.904 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:34.904 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:34.904 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:34.904 
10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:34.904 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:34.904 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.904 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.904 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.904 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:34.904 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.904 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.904 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.904 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.904 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.904 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.904 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.904 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.904 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.904 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.904 "name": "Existed_Raid", 00:15:34.904 "uuid": "27caad26-d52b-4126-8c61-dd506c2b3539", 00:15:34.904 "strip_size_kb": 64, 00:15:34.904 "state": 
"online", 00:15:34.904 "raid_level": "raid5f", 00:15:34.904 "superblock": false, 00:15:34.904 "num_base_bdevs": 3, 00:15:34.904 "num_base_bdevs_discovered": 2, 00:15:34.904 "num_base_bdevs_operational": 2, 00:15:34.904 "base_bdevs_list": [ 00:15:34.904 { 00:15:34.904 "name": null, 00:15:34.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.904 "is_configured": false, 00:15:34.904 "data_offset": 0, 00:15:34.904 "data_size": 65536 00:15:34.904 }, 00:15:34.904 { 00:15:34.904 "name": "BaseBdev2", 00:15:34.904 "uuid": "9b41f0fd-3588-4876-80d2-4517f0e0a51d", 00:15:34.904 "is_configured": true, 00:15:34.904 "data_offset": 0, 00:15:34.904 "data_size": 65536 00:15:34.905 }, 00:15:34.905 { 00:15:34.905 "name": "BaseBdev3", 00:15:34.905 "uuid": "2b731efa-7751-479b-b683-c55a0ebe4409", 00:15:34.905 "is_configured": true, 00:15:34.905 "data_offset": 0, 00:15:34.905 "data_size": 65536 00:15:34.905 } 00:15:34.905 ] 00:15:34.905 }' 00:15:34.905 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.905 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.163 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:35.163 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:35.163 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:35.163 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.163 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.163 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.163 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.163 10:38:38 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:35.163 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:35.163 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:35.163 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.163 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.163 [2024-11-20 10:38:38.594218] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:35.163 [2024-11-20 10:38:38.594332] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:35.551 [2024-11-20 10:38:38.687948] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:35.551 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.551 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:35.551 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:35.551 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.551 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.551 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.551 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:35.551 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.551 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:35.551 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:15:35.551 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:35.551 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.551 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.551 [2024-11-20 10:38:38.743883] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:35.551 [2024-11-20 10:38:38.743937] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:35.551 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.551 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:35.551 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:35.551 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.552 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.552 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.552 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:35.552 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.552 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:35.552 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:35.552 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:35.552 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:35.552 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:15:35.552 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:35.552 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.552 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.552 BaseBdev2 00:15:35.552 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.552 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:35.552 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:35.552 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:35.552 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:35.552 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:35.552 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:35.552 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:35.552 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.552 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.552 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.552 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:35.552 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.552 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:35.552 [ 00:15:35.552 { 00:15:35.552 "name": "BaseBdev2", 00:15:35.552 "aliases": [ 00:15:35.552 "e09e996c-ffc2-4f5c-ba4a-83567dc70643" 00:15:35.552 ], 00:15:35.552 "product_name": "Malloc disk", 00:15:35.552 "block_size": 512, 00:15:35.552 "num_blocks": 65536, 00:15:35.552 "uuid": "e09e996c-ffc2-4f5c-ba4a-83567dc70643", 00:15:35.552 "assigned_rate_limits": { 00:15:35.552 "rw_ios_per_sec": 0, 00:15:35.552 "rw_mbytes_per_sec": 0, 00:15:35.552 "r_mbytes_per_sec": 0, 00:15:35.552 "w_mbytes_per_sec": 0 00:15:35.552 }, 00:15:35.552 "claimed": false, 00:15:35.552 "zoned": false, 00:15:35.552 "supported_io_types": { 00:15:35.552 "read": true, 00:15:35.552 "write": true, 00:15:35.552 "unmap": true, 00:15:35.552 "flush": true, 00:15:35.552 "reset": true, 00:15:35.552 "nvme_admin": false, 00:15:35.552 "nvme_io": false, 00:15:35.552 "nvme_io_md": false, 00:15:35.552 "write_zeroes": true, 00:15:35.552 "zcopy": true, 00:15:35.552 "get_zone_info": false, 00:15:35.552 "zone_management": false, 00:15:35.552 "zone_append": false, 00:15:35.552 "compare": false, 00:15:35.552 "compare_and_write": false, 00:15:35.552 "abort": true, 00:15:35.552 "seek_hole": false, 00:15:35.552 "seek_data": false, 00:15:35.552 "copy": true, 00:15:35.552 "nvme_iov_md": false 00:15:35.552 }, 00:15:35.552 "memory_domains": [ 00:15:35.552 { 00:15:35.552 "dma_device_id": "system", 00:15:35.552 "dma_device_type": 1 00:15:35.552 }, 00:15:35.552 { 00:15:35.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.552 "dma_device_type": 2 00:15:35.552 } 00:15:35.552 ], 00:15:35.552 "driver_specific": {} 00:15:35.552 } 00:15:35.552 ] 00:15:35.552 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.552 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:35.552 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:35.552 10:38:38 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:35.552 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:35.552 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.552 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.552 BaseBdev3 00:15:35.552 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.552 10:38:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:35.552 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:35.552 10:38:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:35.552 10:38:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:35.552 10:38:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:35.552 10:38:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:35.552 10:38:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:35.552 10:38:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.552 10:38:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.552 10:38:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.552 10:38:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:35.552 10:38:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.552 10:38:39 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:35.811 [ 00:15:35.811 { 00:15:35.811 "name": "BaseBdev3", 00:15:35.811 "aliases": [ 00:15:35.811 "6da6266d-c421-4789-aa43-231e6126cdad" 00:15:35.811 ], 00:15:35.811 "product_name": "Malloc disk", 00:15:35.811 "block_size": 512, 00:15:35.811 "num_blocks": 65536, 00:15:35.811 "uuid": "6da6266d-c421-4789-aa43-231e6126cdad", 00:15:35.811 "assigned_rate_limits": { 00:15:35.811 "rw_ios_per_sec": 0, 00:15:35.811 "rw_mbytes_per_sec": 0, 00:15:35.811 "r_mbytes_per_sec": 0, 00:15:35.811 "w_mbytes_per_sec": 0 00:15:35.811 }, 00:15:35.811 "claimed": false, 00:15:35.811 "zoned": false, 00:15:35.811 "supported_io_types": { 00:15:35.811 "read": true, 00:15:35.811 "write": true, 00:15:35.811 "unmap": true, 00:15:35.811 "flush": true, 00:15:35.811 "reset": true, 00:15:35.811 "nvme_admin": false, 00:15:35.811 "nvme_io": false, 00:15:35.811 "nvme_io_md": false, 00:15:35.811 "write_zeroes": true, 00:15:35.811 "zcopy": true, 00:15:35.811 "get_zone_info": false, 00:15:35.811 "zone_management": false, 00:15:35.811 "zone_append": false, 00:15:35.811 "compare": false, 00:15:35.811 "compare_and_write": false, 00:15:35.811 "abort": true, 00:15:35.811 "seek_hole": false, 00:15:35.811 "seek_data": false, 00:15:35.811 "copy": true, 00:15:35.811 "nvme_iov_md": false 00:15:35.811 }, 00:15:35.811 "memory_domains": [ 00:15:35.811 { 00:15:35.811 "dma_device_id": "system", 00:15:35.811 "dma_device_type": 1 00:15:35.811 }, 00:15:35.811 { 00:15:35.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.811 "dma_device_type": 2 00:15:35.811 } 00:15:35.811 ], 00:15:35.811 "driver_specific": {} 00:15:35.811 } 00:15:35.811 ] 00:15:35.811 10:38:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.811 10:38:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:35.811 10:38:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:35.811 10:38:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:35.811 10:38:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:35.811 10:38:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.811 10:38:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.811 [2024-11-20 10:38:39.042906] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:35.811 [2024-11-20 10:38:39.042995] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:35.811 [2024-11-20 10:38:39.043036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:35.811 [2024-11-20 10:38:39.044875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:35.811 10:38:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.812 10:38:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:35.812 10:38:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.812 10:38:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.812 10:38:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.812 10:38:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.812 10:38:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:35.812 10:38:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.812 10:38:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.812 10:38:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.812 10:38:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.812 10:38:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.812 10:38:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.812 10:38:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.812 10:38:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.812 10:38:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.812 10:38:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.812 "name": "Existed_Raid", 00:15:35.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.812 "strip_size_kb": 64, 00:15:35.812 "state": "configuring", 00:15:35.812 "raid_level": "raid5f", 00:15:35.812 "superblock": false, 00:15:35.812 "num_base_bdevs": 3, 00:15:35.812 "num_base_bdevs_discovered": 2, 00:15:35.812 "num_base_bdevs_operational": 3, 00:15:35.812 "base_bdevs_list": [ 00:15:35.812 { 00:15:35.812 "name": "BaseBdev1", 00:15:35.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.812 "is_configured": false, 00:15:35.812 "data_offset": 0, 00:15:35.812 "data_size": 0 00:15:35.812 }, 00:15:35.812 { 00:15:35.812 "name": "BaseBdev2", 00:15:35.812 "uuid": "e09e996c-ffc2-4f5c-ba4a-83567dc70643", 00:15:35.812 "is_configured": true, 00:15:35.812 "data_offset": 0, 00:15:35.812 "data_size": 65536 00:15:35.812 }, 00:15:35.812 { 00:15:35.812 "name": "BaseBdev3", 00:15:35.812 "uuid": "6da6266d-c421-4789-aa43-231e6126cdad", 00:15:35.812 "is_configured": true, 
00:15:35.812 "data_offset": 0, 00:15:35.812 "data_size": 65536 00:15:35.812 } 00:15:35.812 ] 00:15:35.812 }' 00:15:35.812 10:38:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.812 10:38:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.071 10:38:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:36.071 10:38:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.071 10:38:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.071 [2024-11-20 10:38:39.426254] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:36.071 10:38:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.071 10:38:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:36.071 10:38:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:36.071 10:38:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:36.071 10:38:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.071 10:38:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.071 10:38:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:36.071 10:38:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.071 10:38:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.071 10:38:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.071 10:38:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.071 10:38:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.071 10:38:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.071 10:38:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.071 10:38:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.071 10:38:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.071 10:38:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.071 "name": "Existed_Raid", 00:15:36.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.071 "strip_size_kb": 64, 00:15:36.071 "state": "configuring", 00:15:36.071 "raid_level": "raid5f", 00:15:36.071 "superblock": false, 00:15:36.071 "num_base_bdevs": 3, 00:15:36.071 "num_base_bdevs_discovered": 1, 00:15:36.071 "num_base_bdevs_operational": 3, 00:15:36.071 "base_bdevs_list": [ 00:15:36.071 { 00:15:36.071 "name": "BaseBdev1", 00:15:36.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.072 "is_configured": false, 00:15:36.072 "data_offset": 0, 00:15:36.072 "data_size": 0 00:15:36.072 }, 00:15:36.072 { 00:15:36.072 "name": null, 00:15:36.072 "uuid": "e09e996c-ffc2-4f5c-ba4a-83567dc70643", 00:15:36.072 "is_configured": false, 00:15:36.072 "data_offset": 0, 00:15:36.072 "data_size": 65536 00:15:36.072 }, 00:15:36.072 { 00:15:36.072 "name": "BaseBdev3", 00:15:36.072 "uuid": "6da6266d-c421-4789-aa43-231e6126cdad", 00:15:36.072 "is_configured": true, 00:15:36.072 "data_offset": 0, 00:15:36.072 "data_size": 65536 00:15:36.072 } 00:15:36.072 ] 00:15:36.072 }' 00:15:36.072 10:38:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.072 10:38:39 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.643 10:38:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.643 10:38:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.643 10:38:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.643 10:38:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:36.643 10:38:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.643 10:38:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:36.643 10:38:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:36.643 10:38:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.643 10:38:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.643 [2024-11-20 10:38:40.009570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:36.643 BaseBdev1 00:15:36.643 10:38:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.643 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:36.643 10:38:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:36.643 10:38:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:36.643 10:38:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:36.643 10:38:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:36.643 10:38:40 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:36.643 10:38:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:36.643 10:38:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.643 10:38:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.643 10:38:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.643 10:38:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:36.643 10:38:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.643 10:38:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.643 [ 00:15:36.643 { 00:15:36.643 "name": "BaseBdev1", 00:15:36.643 "aliases": [ 00:15:36.643 "fc7c506e-69c8-4d39-8807-0e76e28fee20" 00:15:36.643 ], 00:15:36.643 "product_name": "Malloc disk", 00:15:36.643 "block_size": 512, 00:15:36.643 "num_blocks": 65536, 00:15:36.643 "uuid": "fc7c506e-69c8-4d39-8807-0e76e28fee20", 00:15:36.643 "assigned_rate_limits": { 00:15:36.643 "rw_ios_per_sec": 0, 00:15:36.643 "rw_mbytes_per_sec": 0, 00:15:36.643 "r_mbytes_per_sec": 0, 00:15:36.643 "w_mbytes_per_sec": 0 00:15:36.643 }, 00:15:36.643 "claimed": true, 00:15:36.643 "claim_type": "exclusive_write", 00:15:36.643 "zoned": false, 00:15:36.643 "supported_io_types": { 00:15:36.643 "read": true, 00:15:36.643 "write": true, 00:15:36.643 "unmap": true, 00:15:36.643 "flush": true, 00:15:36.643 "reset": true, 00:15:36.643 "nvme_admin": false, 00:15:36.643 "nvme_io": false, 00:15:36.643 "nvme_io_md": false, 00:15:36.643 "write_zeroes": true, 00:15:36.643 "zcopy": true, 00:15:36.643 "get_zone_info": false, 00:15:36.643 "zone_management": false, 00:15:36.643 "zone_append": false, 00:15:36.643 
"compare": false, 00:15:36.643 "compare_and_write": false, 00:15:36.643 "abort": true, 00:15:36.643 "seek_hole": false, 00:15:36.643 "seek_data": false, 00:15:36.643 "copy": true, 00:15:36.643 "nvme_iov_md": false 00:15:36.643 }, 00:15:36.643 "memory_domains": [ 00:15:36.643 { 00:15:36.643 "dma_device_id": "system", 00:15:36.643 "dma_device_type": 1 00:15:36.643 }, 00:15:36.643 { 00:15:36.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.643 "dma_device_type": 2 00:15:36.643 } 00:15:36.643 ], 00:15:36.643 "driver_specific": {} 00:15:36.643 } 00:15:36.643 ] 00:15:36.643 10:38:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.643 10:38:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:36.643 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:36.643 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:36.643 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:36.643 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.643 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.643 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:36.643 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.643 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.643 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.643 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.643 10:38:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.643 10:38:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.643 10:38:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.643 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.643 10:38:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.643 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.643 "name": "Existed_Raid", 00:15:36.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.643 "strip_size_kb": 64, 00:15:36.643 "state": "configuring", 00:15:36.643 "raid_level": "raid5f", 00:15:36.643 "superblock": false, 00:15:36.643 "num_base_bdevs": 3, 00:15:36.643 "num_base_bdevs_discovered": 2, 00:15:36.643 "num_base_bdevs_operational": 3, 00:15:36.643 "base_bdevs_list": [ 00:15:36.643 { 00:15:36.643 "name": "BaseBdev1", 00:15:36.643 "uuid": "fc7c506e-69c8-4d39-8807-0e76e28fee20", 00:15:36.643 "is_configured": true, 00:15:36.643 "data_offset": 0, 00:15:36.643 "data_size": 65536 00:15:36.643 }, 00:15:36.643 { 00:15:36.643 "name": null, 00:15:36.643 "uuid": "e09e996c-ffc2-4f5c-ba4a-83567dc70643", 00:15:36.643 "is_configured": false, 00:15:36.644 "data_offset": 0, 00:15:36.644 "data_size": 65536 00:15:36.644 }, 00:15:36.644 { 00:15:36.644 "name": "BaseBdev3", 00:15:36.644 "uuid": "6da6266d-c421-4789-aa43-231e6126cdad", 00:15:36.644 "is_configured": true, 00:15:36.644 "data_offset": 0, 00:15:36.644 "data_size": 65536 00:15:36.644 } 00:15:36.644 ] 00:15:36.644 }' 00:15:36.644 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.644 10:38:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.212 10:38:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.212 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:37.212 10:38:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.212 10:38:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.212 10:38:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.212 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:37.212 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:37.212 10:38:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.212 10:38:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.212 [2024-11-20 10:38:40.536752] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:37.212 10:38:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.212 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:37.212 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.212 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.212 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.212 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.212 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:37.212 10:38:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.212 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.212 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.212 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.212 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.212 10:38:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.212 10:38:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.212 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.212 10:38:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.212 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.212 "name": "Existed_Raid", 00:15:37.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.212 "strip_size_kb": 64, 00:15:37.212 "state": "configuring", 00:15:37.212 "raid_level": "raid5f", 00:15:37.212 "superblock": false, 00:15:37.212 "num_base_bdevs": 3, 00:15:37.212 "num_base_bdevs_discovered": 1, 00:15:37.212 "num_base_bdevs_operational": 3, 00:15:37.212 "base_bdevs_list": [ 00:15:37.212 { 00:15:37.212 "name": "BaseBdev1", 00:15:37.212 "uuid": "fc7c506e-69c8-4d39-8807-0e76e28fee20", 00:15:37.212 "is_configured": true, 00:15:37.212 "data_offset": 0, 00:15:37.212 "data_size": 65536 00:15:37.212 }, 00:15:37.212 { 00:15:37.212 "name": null, 00:15:37.212 "uuid": "e09e996c-ffc2-4f5c-ba4a-83567dc70643", 00:15:37.212 "is_configured": false, 00:15:37.212 "data_offset": 0, 00:15:37.212 "data_size": 65536 00:15:37.212 }, 00:15:37.212 { 00:15:37.212 "name": null, 
00:15:37.212 "uuid": "6da6266d-c421-4789-aa43-231e6126cdad", 00:15:37.212 "is_configured": false, 00:15:37.212 "data_offset": 0, 00:15:37.212 "data_size": 65536 00:15:37.212 } 00:15:37.212 ] 00:15:37.212 }' 00:15:37.212 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.212 10:38:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.471 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.471 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:37.471 10:38:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.471 10:38:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.731 10:38:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.731 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:37.731 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:37.731 10:38:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.731 10:38:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.731 [2024-11-20 10:38:40.980006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:37.731 10:38:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.731 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:37.731 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.731 10:38:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.731 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.731 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.731 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:37.731 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.731 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.731 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.731 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.731 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.731 10:38:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.731 10:38:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.731 10:38:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.731 10:38:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.731 10:38:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.731 "name": "Existed_Raid", 00:15:37.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.731 "strip_size_kb": 64, 00:15:37.731 "state": "configuring", 00:15:37.731 "raid_level": "raid5f", 00:15:37.732 "superblock": false, 00:15:37.732 "num_base_bdevs": 3, 00:15:37.732 "num_base_bdevs_discovered": 2, 00:15:37.732 "num_base_bdevs_operational": 3, 00:15:37.732 "base_bdevs_list": [ 00:15:37.732 { 
00:15:37.732 "name": "BaseBdev1", 00:15:37.732 "uuid": "fc7c506e-69c8-4d39-8807-0e76e28fee20", 00:15:37.732 "is_configured": true, 00:15:37.732 "data_offset": 0, 00:15:37.732 "data_size": 65536 00:15:37.732 }, 00:15:37.732 { 00:15:37.732 "name": null, 00:15:37.732 "uuid": "e09e996c-ffc2-4f5c-ba4a-83567dc70643", 00:15:37.732 "is_configured": false, 00:15:37.732 "data_offset": 0, 00:15:37.732 "data_size": 65536 00:15:37.732 }, 00:15:37.732 { 00:15:37.732 "name": "BaseBdev3", 00:15:37.732 "uuid": "6da6266d-c421-4789-aa43-231e6126cdad", 00:15:37.732 "is_configured": true, 00:15:37.732 "data_offset": 0, 00:15:37.732 "data_size": 65536 00:15:37.732 } 00:15:37.732 ] 00:15:37.732 }' 00:15:37.732 10:38:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.732 10:38:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.991 10:38:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.991 10:38:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.991 10:38:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.991 10:38:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:37.991 10:38:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.991 10:38:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:37.991 10:38:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:37.991 10:38:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.991 10:38:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.991 [2024-11-20 10:38:41.415383] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:38.250 10:38:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.250 10:38:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:38.250 10:38:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.250 10:38:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.250 10:38:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.250 10:38:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.251 10:38:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:38.251 10:38:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.251 10:38:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.251 10:38:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.251 10:38:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.251 10:38:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.251 10:38:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.251 10:38:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.251 10:38:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.251 10:38:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.251 10:38:41 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.251 "name": "Existed_Raid", 00:15:38.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.251 "strip_size_kb": 64, 00:15:38.251 "state": "configuring", 00:15:38.251 "raid_level": "raid5f", 00:15:38.251 "superblock": false, 00:15:38.251 "num_base_bdevs": 3, 00:15:38.251 "num_base_bdevs_discovered": 1, 00:15:38.251 "num_base_bdevs_operational": 3, 00:15:38.251 "base_bdevs_list": [ 00:15:38.251 { 00:15:38.251 "name": null, 00:15:38.251 "uuid": "fc7c506e-69c8-4d39-8807-0e76e28fee20", 00:15:38.251 "is_configured": false, 00:15:38.251 "data_offset": 0, 00:15:38.251 "data_size": 65536 00:15:38.251 }, 00:15:38.251 { 00:15:38.251 "name": null, 00:15:38.251 "uuid": "e09e996c-ffc2-4f5c-ba4a-83567dc70643", 00:15:38.251 "is_configured": false, 00:15:38.251 "data_offset": 0, 00:15:38.251 "data_size": 65536 00:15:38.251 }, 00:15:38.251 { 00:15:38.251 "name": "BaseBdev3", 00:15:38.251 "uuid": "6da6266d-c421-4789-aa43-231e6126cdad", 00:15:38.251 "is_configured": true, 00:15:38.251 "data_offset": 0, 00:15:38.251 "data_size": 65536 00:15:38.251 } 00:15:38.251 ] 00:15:38.251 }' 00:15:38.251 10:38:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.251 10:38:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.510 10:38:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.510 10:38:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:38.510 10:38:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.510 10:38:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.769 10:38:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.769 10:38:42 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:38.769 10:38:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:38.769 10:38:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.769 10:38:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.769 [2024-11-20 10:38:42.031347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:38.769 10:38:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.769 10:38:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:38.769 10:38:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.769 10:38:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.769 10:38:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.769 10:38:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.769 10:38:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:38.769 10:38:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.769 10:38:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.769 10:38:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.769 10:38:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.769 10:38:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.769 10:38:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.769 10:38:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.769 10:38:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.769 10:38:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.769 10:38:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.769 "name": "Existed_Raid", 00:15:38.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.769 "strip_size_kb": 64, 00:15:38.769 "state": "configuring", 00:15:38.769 "raid_level": "raid5f", 00:15:38.769 "superblock": false, 00:15:38.769 "num_base_bdevs": 3, 00:15:38.769 "num_base_bdevs_discovered": 2, 00:15:38.769 "num_base_bdevs_operational": 3, 00:15:38.769 "base_bdevs_list": [ 00:15:38.769 { 00:15:38.769 "name": null, 00:15:38.769 "uuid": "fc7c506e-69c8-4d39-8807-0e76e28fee20", 00:15:38.769 "is_configured": false, 00:15:38.769 "data_offset": 0, 00:15:38.769 "data_size": 65536 00:15:38.769 }, 00:15:38.769 { 00:15:38.769 "name": "BaseBdev2", 00:15:38.769 "uuid": "e09e996c-ffc2-4f5c-ba4a-83567dc70643", 00:15:38.769 "is_configured": true, 00:15:38.769 "data_offset": 0, 00:15:38.769 "data_size": 65536 00:15:38.769 }, 00:15:38.769 { 00:15:38.769 "name": "BaseBdev3", 00:15:38.769 "uuid": "6da6266d-c421-4789-aa43-231e6126cdad", 00:15:38.769 "is_configured": true, 00:15:38.769 "data_offset": 0, 00:15:38.769 "data_size": 65536 00:15:38.769 } 00:15:38.769 ] 00:15:38.769 }' 00:15:38.769 10:38:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.769 10:38:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.028 10:38:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.028 10:38:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:39.028 10:38:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.028 10:38:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.028 10:38:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.028 10:38:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:39.028 10:38:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:39.028 10:38:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.028 10:38:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.028 10:38:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.289 10:38:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.289 10:38:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fc7c506e-69c8-4d39-8807-0e76e28fee20 00:15:39.289 10:38:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.289 10:38:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.289 [2024-11-20 10:38:42.559955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:39.289 [2024-11-20 10:38:42.560064] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:39.289 [2024-11-20 10:38:42.560093] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:39.289 [2024-11-20 10:38:42.560427] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:15:39.289 [2024-11-20 10:38:42.566263] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:39.289 [2024-11-20 10:38:42.566327] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:39.289 [2024-11-20 10:38:42.566671] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.289 NewBaseBdev 00:15:39.289 10:38:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.289 10:38:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:39.289 10:38:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:39.289 10:38:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:39.289 10:38:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:39.289 10:38:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:39.289 10:38:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:39.289 10:38:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:39.289 10:38:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.289 10:38:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.289 10:38:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.289 10:38:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:39.289 10:38:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.289 10:38:42 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.289 [ 00:15:39.289 { 00:15:39.289 "name": "NewBaseBdev", 00:15:39.289 "aliases": [ 00:15:39.289 "fc7c506e-69c8-4d39-8807-0e76e28fee20" 00:15:39.289 ], 00:15:39.289 "product_name": "Malloc disk", 00:15:39.289 "block_size": 512, 00:15:39.289 "num_blocks": 65536, 00:15:39.289 "uuid": "fc7c506e-69c8-4d39-8807-0e76e28fee20", 00:15:39.289 "assigned_rate_limits": { 00:15:39.289 "rw_ios_per_sec": 0, 00:15:39.289 "rw_mbytes_per_sec": 0, 00:15:39.289 "r_mbytes_per_sec": 0, 00:15:39.289 "w_mbytes_per_sec": 0 00:15:39.289 }, 00:15:39.289 "claimed": true, 00:15:39.289 "claim_type": "exclusive_write", 00:15:39.289 "zoned": false, 00:15:39.289 "supported_io_types": { 00:15:39.289 "read": true, 00:15:39.289 "write": true, 00:15:39.289 "unmap": true, 00:15:39.289 "flush": true, 00:15:39.289 "reset": true, 00:15:39.289 "nvme_admin": false, 00:15:39.289 "nvme_io": false, 00:15:39.289 "nvme_io_md": false, 00:15:39.289 "write_zeroes": true, 00:15:39.289 "zcopy": true, 00:15:39.289 "get_zone_info": false, 00:15:39.289 "zone_management": false, 00:15:39.289 "zone_append": false, 00:15:39.289 "compare": false, 00:15:39.289 "compare_and_write": false, 00:15:39.289 "abort": true, 00:15:39.289 "seek_hole": false, 00:15:39.289 "seek_data": false, 00:15:39.289 "copy": true, 00:15:39.289 "nvme_iov_md": false 00:15:39.289 }, 00:15:39.289 "memory_domains": [ 00:15:39.289 { 00:15:39.289 "dma_device_id": "system", 00:15:39.289 "dma_device_type": 1 00:15:39.289 }, 00:15:39.289 { 00:15:39.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.289 "dma_device_type": 2 00:15:39.289 } 00:15:39.289 ], 00:15:39.289 "driver_specific": {} 00:15:39.289 } 00:15:39.289 ] 00:15:39.289 10:38:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.289 10:38:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:39.289 10:38:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:39.289 10:38:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.289 10:38:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.289 10:38:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.289 10:38:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.289 10:38:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:39.289 10:38:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.289 10:38:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.289 10:38:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.289 10:38:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.289 10:38:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.289 10:38:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.289 10:38:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.289 10:38:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.289 10:38:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.289 10:38:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.289 "name": "Existed_Raid", 00:15:39.289 "uuid": "2a8e41fc-69b8-45c1-8fc1-7d507671653b", 00:15:39.289 "strip_size_kb": 64, 00:15:39.289 "state": "online", 
00:15:39.289 "raid_level": "raid5f", 00:15:39.289 "superblock": false, 00:15:39.289 "num_base_bdevs": 3, 00:15:39.289 "num_base_bdevs_discovered": 3, 00:15:39.289 "num_base_bdevs_operational": 3, 00:15:39.289 "base_bdevs_list": [ 00:15:39.289 { 00:15:39.289 "name": "NewBaseBdev", 00:15:39.289 "uuid": "fc7c506e-69c8-4d39-8807-0e76e28fee20", 00:15:39.289 "is_configured": true, 00:15:39.289 "data_offset": 0, 00:15:39.289 "data_size": 65536 00:15:39.289 }, 00:15:39.289 { 00:15:39.289 "name": "BaseBdev2", 00:15:39.289 "uuid": "e09e996c-ffc2-4f5c-ba4a-83567dc70643", 00:15:39.289 "is_configured": true, 00:15:39.289 "data_offset": 0, 00:15:39.289 "data_size": 65536 00:15:39.289 }, 00:15:39.289 { 00:15:39.289 "name": "BaseBdev3", 00:15:39.289 "uuid": "6da6266d-c421-4789-aa43-231e6126cdad", 00:15:39.289 "is_configured": true, 00:15:39.289 "data_offset": 0, 00:15:39.289 "data_size": 65536 00:15:39.289 } 00:15:39.289 ] 00:15:39.289 }' 00:15:39.289 10:38:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.289 10:38:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.860 10:38:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:39.860 10:38:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:39.860 10:38:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:39.860 10:38:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:39.860 10:38:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:39.860 10:38:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:39.860 10:38:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:39.860 10:38:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:39.860 10:38:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.860 10:38:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.860 [2024-11-20 10:38:43.081052] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:39.860 10:38:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.860 10:38:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:39.860 "name": "Existed_Raid", 00:15:39.860 "aliases": [ 00:15:39.860 "2a8e41fc-69b8-45c1-8fc1-7d507671653b" 00:15:39.860 ], 00:15:39.860 "product_name": "Raid Volume", 00:15:39.860 "block_size": 512, 00:15:39.860 "num_blocks": 131072, 00:15:39.860 "uuid": "2a8e41fc-69b8-45c1-8fc1-7d507671653b", 00:15:39.860 "assigned_rate_limits": { 00:15:39.860 "rw_ios_per_sec": 0, 00:15:39.860 "rw_mbytes_per_sec": 0, 00:15:39.860 "r_mbytes_per_sec": 0, 00:15:39.860 "w_mbytes_per_sec": 0 00:15:39.860 }, 00:15:39.860 "claimed": false, 00:15:39.860 "zoned": false, 00:15:39.860 "supported_io_types": { 00:15:39.860 "read": true, 00:15:39.860 "write": true, 00:15:39.860 "unmap": false, 00:15:39.860 "flush": false, 00:15:39.860 "reset": true, 00:15:39.860 "nvme_admin": false, 00:15:39.860 "nvme_io": false, 00:15:39.860 "nvme_io_md": false, 00:15:39.860 "write_zeroes": true, 00:15:39.860 "zcopy": false, 00:15:39.860 "get_zone_info": false, 00:15:39.860 "zone_management": false, 00:15:39.860 "zone_append": false, 00:15:39.860 "compare": false, 00:15:39.860 "compare_and_write": false, 00:15:39.860 "abort": false, 00:15:39.860 "seek_hole": false, 00:15:39.860 "seek_data": false, 00:15:39.860 "copy": false, 00:15:39.860 "nvme_iov_md": false 00:15:39.860 }, 00:15:39.860 "driver_specific": { 00:15:39.860 "raid": { 00:15:39.860 "uuid": 
"2a8e41fc-69b8-45c1-8fc1-7d507671653b", 00:15:39.860 "strip_size_kb": 64, 00:15:39.860 "state": "online", 00:15:39.860 "raid_level": "raid5f", 00:15:39.860 "superblock": false, 00:15:39.860 "num_base_bdevs": 3, 00:15:39.860 "num_base_bdevs_discovered": 3, 00:15:39.860 "num_base_bdevs_operational": 3, 00:15:39.860 "base_bdevs_list": [ 00:15:39.860 { 00:15:39.860 "name": "NewBaseBdev", 00:15:39.860 "uuid": "fc7c506e-69c8-4d39-8807-0e76e28fee20", 00:15:39.860 "is_configured": true, 00:15:39.860 "data_offset": 0, 00:15:39.860 "data_size": 65536 00:15:39.860 }, 00:15:39.860 { 00:15:39.860 "name": "BaseBdev2", 00:15:39.860 "uuid": "e09e996c-ffc2-4f5c-ba4a-83567dc70643", 00:15:39.860 "is_configured": true, 00:15:39.860 "data_offset": 0, 00:15:39.860 "data_size": 65536 00:15:39.860 }, 00:15:39.860 { 00:15:39.860 "name": "BaseBdev3", 00:15:39.860 "uuid": "6da6266d-c421-4789-aa43-231e6126cdad", 00:15:39.860 "is_configured": true, 00:15:39.860 "data_offset": 0, 00:15:39.860 "data_size": 65536 00:15:39.860 } 00:15:39.860 ] 00:15:39.860 } 00:15:39.860 } 00:15:39.860 }' 00:15:39.860 10:38:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:39.861 10:38:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:39.861 BaseBdev2 00:15:39.861 BaseBdev3' 00:15:39.861 10:38:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.861 10:38:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:39.861 10:38:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:39.861 10:38:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:39.861 10:38:43 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.861 10:38:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.861 10:38:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.861 10:38:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.861 10:38:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:39.861 10:38:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:39.861 10:38:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:39.861 10:38:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:39.861 10:38:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.861 10:38:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.861 10:38:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.861 10:38:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.861 10:38:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:39.861 10:38:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:39.861 10:38:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:39.861 10:38:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:39.861 10:38:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.861 10:38:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.861 10:38:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.861 10:38:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.120 10:38:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:40.120 10:38:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:40.120 10:38:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:40.120 10:38:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.120 10:38:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.120 [2024-11-20 10:38:43.368427] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:40.120 [2024-11-20 10:38:43.368502] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:40.120 [2024-11-20 10:38:43.368606] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:40.120 [2024-11-20 10:38:43.368914] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:40.120 [2024-11-20 10:38:43.368974] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:40.120 10:38:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.120 10:38:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80056 00:15:40.120 10:38:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80056 ']' 00:15:40.120 10:38:43 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 80056 00:15:40.120 10:38:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:40.120 10:38:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:40.120 10:38:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80056 00:15:40.120 10:38:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:40.120 killing process with pid 80056 00:15:40.120 10:38:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:40.121 10:38:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80056' 00:15:40.121 10:38:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80056 00:15:40.121 [2024-11-20 10:38:43.418430] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:40.121 10:38:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80056 00:15:40.379 [2024-11-20 10:38:43.723400] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:41.758 10:38:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:41.758 00:15:41.758 real 0m10.578s 00:15:41.758 user 0m16.758s 00:15:41.758 sys 0m1.926s 00:15:41.758 ************************************ 00:15:41.758 END TEST raid5f_state_function_test 00:15:41.758 ************************************ 00:15:41.758 10:38:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:41.758 10:38:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.758 10:38:44 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:15:41.758 10:38:44 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:41.758 10:38:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:41.758 10:38:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:41.758 ************************************ 00:15:41.758 START TEST raid5f_state_function_test_sb 00:15:41.758 ************************************ 00:15:41.758 10:38:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:15:41.758 10:38:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:41.758 10:38:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:41.758 10:38:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:41.758 10:38:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:41.758 10:38:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:41.758 10:38:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:41.758 10:38:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:41.758 10:38:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:41.758 10:38:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:41.758 10:38:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:41.758 10:38:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:41.758 10:38:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:41.758 10:38:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:41.758 10:38:44 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:41.758 10:38:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:41.758 10:38:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:41.758 10:38:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:41.758 10:38:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:41.758 10:38:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:41.759 10:38:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:41.759 10:38:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:41.759 10:38:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:41.759 10:38:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:41.759 10:38:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:41.759 10:38:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:41.759 10:38:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:41.759 10:38:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80677 00:15:41.759 10:38:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:41.759 Process raid pid: 80677 00:15:41.759 10:38:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80677' 00:15:41.759 10:38:44 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80677 00:15:41.759 10:38:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80677 ']' 00:15:41.759 10:38:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.759 10:38:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:41.759 10:38:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.759 10:38:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:41.759 10:38:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.759 [2024-11-20 10:38:45.021683] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:15:41.759 [2024-11-20 10:38:45.021905] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:41.759 [2024-11-20 10:38:45.199575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.043 [2024-11-20 10:38:45.325853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.302 [2024-11-20 10:38:45.514538] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:42.302 [2024-11-20 10:38:45.514670] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:42.562 10:38:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:42.562 10:38:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:42.562 10:38:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:42.562 10:38:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.562 10:38:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.562 [2024-11-20 10:38:45.846410] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:42.562 [2024-11-20 10:38:45.846496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:42.562 [2024-11-20 10:38:45.846541] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:42.562 [2024-11-20 10:38:45.846565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:42.562 [2024-11-20 10:38:45.846584] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:15:42.562 [2024-11-20 10:38:45.846605] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:42.562 10:38:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.562 10:38:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:42.562 10:38:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:42.562 10:38:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:42.562 10:38:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.562 10:38:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.562 10:38:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:42.562 10:38:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.562 10:38:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.562 10:38:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.562 10:38:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.562 10:38:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.562 10:38:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.562 10:38:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.562 10:38:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.562 10:38:45 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.562 10:38:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.562 "name": "Existed_Raid", 00:15:42.562 "uuid": "750f64d0-fc5c-44e5-9a78-ec0f5ea98218", 00:15:42.562 "strip_size_kb": 64, 00:15:42.562 "state": "configuring", 00:15:42.562 "raid_level": "raid5f", 00:15:42.562 "superblock": true, 00:15:42.562 "num_base_bdevs": 3, 00:15:42.562 "num_base_bdevs_discovered": 0, 00:15:42.562 "num_base_bdevs_operational": 3, 00:15:42.562 "base_bdevs_list": [ 00:15:42.562 { 00:15:42.562 "name": "BaseBdev1", 00:15:42.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.562 "is_configured": false, 00:15:42.562 "data_offset": 0, 00:15:42.562 "data_size": 0 00:15:42.562 }, 00:15:42.562 { 00:15:42.562 "name": "BaseBdev2", 00:15:42.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.562 "is_configured": false, 00:15:42.562 "data_offset": 0, 00:15:42.562 "data_size": 0 00:15:42.562 }, 00:15:42.562 { 00:15:42.562 "name": "BaseBdev3", 00:15:42.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.562 "is_configured": false, 00:15:42.562 "data_offset": 0, 00:15:42.562 "data_size": 0 00:15:42.562 } 00:15:42.562 ] 00:15:42.562 }' 00:15:42.562 10:38:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.562 10:38:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.131 10:38:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:43.131 10:38:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.131 10:38:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.131 [2024-11-20 10:38:46.341488] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:43.132 
[2024-11-20 10:38:46.341581] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.132 [2024-11-20 10:38:46.353463] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:43.132 [2024-11-20 10:38:46.353540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:43.132 [2024-11-20 10:38:46.353568] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:43.132 [2024-11-20 10:38:46.353590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:43.132 [2024-11-20 10:38:46.353607] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:43.132 [2024-11-20 10:38:46.353627] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.132 [2024-11-20 10:38:46.400237] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:43.132 BaseBdev1 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.132 [ 00:15:43.132 { 00:15:43.132 "name": "BaseBdev1", 00:15:43.132 "aliases": [ 00:15:43.132 "f02a672e-9f00-41f6-9a28-27560a51b43b" 00:15:43.132 ], 00:15:43.132 "product_name": "Malloc disk", 00:15:43.132 "block_size": 512, 00:15:43.132 
"num_blocks": 65536, 00:15:43.132 "uuid": "f02a672e-9f00-41f6-9a28-27560a51b43b", 00:15:43.132 "assigned_rate_limits": { 00:15:43.132 "rw_ios_per_sec": 0, 00:15:43.132 "rw_mbytes_per_sec": 0, 00:15:43.132 "r_mbytes_per_sec": 0, 00:15:43.132 "w_mbytes_per_sec": 0 00:15:43.132 }, 00:15:43.132 "claimed": true, 00:15:43.132 "claim_type": "exclusive_write", 00:15:43.132 "zoned": false, 00:15:43.132 "supported_io_types": { 00:15:43.132 "read": true, 00:15:43.132 "write": true, 00:15:43.132 "unmap": true, 00:15:43.132 "flush": true, 00:15:43.132 "reset": true, 00:15:43.132 "nvme_admin": false, 00:15:43.132 "nvme_io": false, 00:15:43.132 "nvme_io_md": false, 00:15:43.132 "write_zeroes": true, 00:15:43.132 "zcopy": true, 00:15:43.132 "get_zone_info": false, 00:15:43.132 "zone_management": false, 00:15:43.132 "zone_append": false, 00:15:43.132 "compare": false, 00:15:43.132 "compare_and_write": false, 00:15:43.132 "abort": true, 00:15:43.132 "seek_hole": false, 00:15:43.132 "seek_data": false, 00:15:43.132 "copy": true, 00:15:43.132 "nvme_iov_md": false 00:15:43.132 }, 00:15:43.132 "memory_domains": [ 00:15:43.132 { 00:15:43.132 "dma_device_id": "system", 00:15:43.132 "dma_device_type": 1 00:15:43.132 }, 00:15:43.132 { 00:15:43.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.132 "dma_device_type": 2 00:15:43.132 } 00:15:43.132 ], 00:15:43.132 "driver_specific": {} 00:15:43.132 } 00:15:43.132 ] 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.132 "name": "Existed_Raid", 00:15:43.132 "uuid": "caa38659-36e5-4426-b269-2a3f747e007c", 00:15:43.132 "strip_size_kb": 64, 00:15:43.132 "state": "configuring", 00:15:43.132 "raid_level": "raid5f", 00:15:43.132 "superblock": true, 00:15:43.132 "num_base_bdevs": 3, 00:15:43.132 "num_base_bdevs_discovered": 1, 00:15:43.132 "num_base_bdevs_operational": 3, 00:15:43.132 "base_bdevs_list": [ 00:15:43.132 { 00:15:43.132 
"name": "BaseBdev1", 00:15:43.132 "uuid": "f02a672e-9f00-41f6-9a28-27560a51b43b", 00:15:43.132 "is_configured": true, 00:15:43.132 "data_offset": 2048, 00:15:43.132 "data_size": 63488 00:15:43.132 }, 00:15:43.132 { 00:15:43.132 "name": "BaseBdev2", 00:15:43.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.132 "is_configured": false, 00:15:43.132 "data_offset": 0, 00:15:43.132 "data_size": 0 00:15:43.132 }, 00:15:43.132 { 00:15:43.132 "name": "BaseBdev3", 00:15:43.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.132 "is_configured": false, 00:15:43.132 "data_offset": 0, 00:15:43.132 "data_size": 0 00:15:43.132 } 00:15:43.132 ] 00:15:43.132 }' 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.132 10:38:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.703 10:38:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:43.703 10:38:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.703 10:38:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.703 [2024-11-20 10:38:46.875494] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:43.703 [2024-11-20 10:38:46.875607] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:43.703 10:38:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.703 10:38:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:43.703 10:38:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.703 10:38:46 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:15:43.703 [2024-11-20 10:38:46.887563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:43.703 [2024-11-20 10:38:46.889399] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:43.703 [2024-11-20 10:38:46.889489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:43.703 [2024-11-20 10:38:46.889522] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:43.703 [2024-11-20 10:38:46.889562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:43.703 10:38:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.703 10:38:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:43.703 10:38:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:43.703 10:38:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:43.703 10:38:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.703 10:38:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.703 10:38:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.703 10:38:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.703 10:38:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:43.703 10:38:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.703 10:38:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:43.703 10:38:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.703 10:38:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.703 10:38:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.703 10:38:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.703 10:38:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.703 10:38:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.703 10:38:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.703 10:38:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.703 "name": "Existed_Raid", 00:15:43.703 "uuid": "68e9ff33-e8e7-4b82-a370-4d4789404e52", 00:15:43.703 "strip_size_kb": 64, 00:15:43.703 "state": "configuring", 00:15:43.703 "raid_level": "raid5f", 00:15:43.703 "superblock": true, 00:15:43.703 "num_base_bdevs": 3, 00:15:43.703 "num_base_bdevs_discovered": 1, 00:15:43.703 "num_base_bdevs_operational": 3, 00:15:43.703 "base_bdevs_list": [ 00:15:43.703 { 00:15:43.703 "name": "BaseBdev1", 00:15:43.703 "uuid": "f02a672e-9f00-41f6-9a28-27560a51b43b", 00:15:43.703 "is_configured": true, 00:15:43.703 "data_offset": 2048, 00:15:43.703 "data_size": 63488 00:15:43.703 }, 00:15:43.703 { 00:15:43.703 "name": "BaseBdev2", 00:15:43.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.703 "is_configured": false, 00:15:43.703 "data_offset": 0, 00:15:43.703 "data_size": 0 00:15:43.703 }, 00:15:43.703 { 00:15:43.703 "name": "BaseBdev3", 00:15:43.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.703 "is_configured": false, 00:15:43.703 "data_offset": 0, 00:15:43.703 "data_size": 
0 00:15:43.703 } 00:15:43.703 ] 00:15:43.703 }' 00:15:43.703 10:38:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.703 10:38:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.963 10:38:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:43.963 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.963 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.964 [2024-11-20 10:38:47.346942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:43.964 BaseBdev2 00:15:43.964 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.964 10:38:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:43.964 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:43.964 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:43.964 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:43.964 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:43.964 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:43.964 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:43.964 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.964 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.964 10:38:47 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.964 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:43.964 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.964 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.964 [ 00:15:43.964 { 00:15:43.964 "name": "BaseBdev2", 00:15:43.964 "aliases": [ 00:15:43.964 "13c2e8a5-1e9f-4079-93cc-b6d8937d0a74" 00:15:43.964 ], 00:15:43.964 "product_name": "Malloc disk", 00:15:43.964 "block_size": 512, 00:15:43.964 "num_blocks": 65536, 00:15:43.964 "uuid": "13c2e8a5-1e9f-4079-93cc-b6d8937d0a74", 00:15:43.964 "assigned_rate_limits": { 00:15:43.964 "rw_ios_per_sec": 0, 00:15:43.964 "rw_mbytes_per_sec": 0, 00:15:43.964 "r_mbytes_per_sec": 0, 00:15:43.964 "w_mbytes_per_sec": 0 00:15:43.964 }, 00:15:43.964 "claimed": true, 00:15:43.964 "claim_type": "exclusive_write", 00:15:43.964 "zoned": false, 00:15:43.964 "supported_io_types": { 00:15:43.964 "read": true, 00:15:43.964 "write": true, 00:15:43.964 "unmap": true, 00:15:43.964 "flush": true, 00:15:43.964 "reset": true, 00:15:43.964 "nvme_admin": false, 00:15:43.964 "nvme_io": false, 00:15:43.964 "nvme_io_md": false, 00:15:43.964 "write_zeroes": true, 00:15:43.964 "zcopy": true, 00:15:43.964 "get_zone_info": false, 00:15:43.964 "zone_management": false, 00:15:43.964 "zone_append": false, 00:15:43.964 "compare": false, 00:15:43.964 "compare_and_write": false, 00:15:43.964 "abort": true, 00:15:43.964 "seek_hole": false, 00:15:43.964 "seek_data": false, 00:15:43.964 "copy": true, 00:15:43.964 "nvme_iov_md": false 00:15:43.964 }, 00:15:43.964 "memory_domains": [ 00:15:43.964 { 00:15:43.964 "dma_device_id": "system", 00:15:43.964 "dma_device_type": 1 00:15:43.964 }, 00:15:43.964 { 00:15:43.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.964 "dma_device_type": 2 00:15:43.964 } 
00:15:43.964 ], 00:15:43.964 "driver_specific": {} 00:15:43.964 } 00:15:43.964 ] 00:15:43.964 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.964 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:43.964 10:38:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:43.964 10:38:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:43.964 10:38:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:43.964 10:38:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.964 10:38:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.964 10:38:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.964 10:38:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.964 10:38:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:43.964 10:38:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.964 10:38:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.964 10:38:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.964 10:38:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.964 10:38:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.964 10:38:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:15:43.964 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.964 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.964 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.964 10:38:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.964 "name": "Existed_Raid", 00:15:43.964 "uuid": "68e9ff33-e8e7-4b82-a370-4d4789404e52", 00:15:43.964 "strip_size_kb": 64, 00:15:43.964 "state": "configuring", 00:15:43.964 "raid_level": "raid5f", 00:15:43.964 "superblock": true, 00:15:43.964 "num_base_bdevs": 3, 00:15:43.964 "num_base_bdevs_discovered": 2, 00:15:43.964 "num_base_bdevs_operational": 3, 00:15:43.964 "base_bdevs_list": [ 00:15:43.964 { 00:15:43.964 "name": "BaseBdev1", 00:15:43.964 "uuid": "f02a672e-9f00-41f6-9a28-27560a51b43b", 00:15:43.964 "is_configured": true, 00:15:43.964 "data_offset": 2048, 00:15:43.964 "data_size": 63488 00:15:43.964 }, 00:15:43.964 { 00:15:43.964 "name": "BaseBdev2", 00:15:43.964 "uuid": "13c2e8a5-1e9f-4079-93cc-b6d8937d0a74", 00:15:43.964 "is_configured": true, 00:15:43.964 "data_offset": 2048, 00:15:43.964 "data_size": 63488 00:15:43.964 }, 00:15:43.964 { 00:15:43.964 "name": "BaseBdev3", 00:15:43.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.964 "is_configured": false, 00:15:43.964 "data_offset": 0, 00:15:43.964 "data_size": 0 00:15:43.964 } 00:15:43.964 ] 00:15:43.964 }' 00:15:43.964 10:38:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.964 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.559 10:38:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:44.559 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:15:44.559 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.559 [2024-11-20 10:38:47.881699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:44.559 [2024-11-20 10:38:47.882094] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:44.559 [2024-11-20 10:38:47.882162] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:44.559 [2024-11-20 10:38:47.882462] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:44.559 BaseBdev3 00:15:44.559 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.559 10:38:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:44.559 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:44.559 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:44.559 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:44.559 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:44.559 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:44.559 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:44.559 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.559 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.559 [2024-11-20 10:38:47.888079] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:44.559 [2024-11-20 10:38:47.888150] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:44.559 [2024-11-20 10:38:47.888403] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.559 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.559 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:44.559 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.559 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.559 [ 00:15:44.559 { 00:15:44.559 "name": "BaseBdev3", 00:15:44.559 "aliases": [ 00:15:44.559 "16371298-f7bb-405a-8ae8-88a0cac79783" 00:15:44.559 ], 00:15:44.559 "product_name": "Malloc disk", 00:15:44.559 "block_size": 512, 00:15:44.559 "num_blocks": 65536, 00:15:44.559 "uuid": "16371298-f7bb-405a-8ae8-88a0cac79783", 00:15:44.559 "assigned_rate_limits": { 00:15:44.559 "rw_ios_per_sec": 0, 00:15:44.559 "rw_mbytes_per_sec": 0, 00:15:44.559 "r_mbytes_per_sec": 0, 00:15:44.559 "w_mbytes_per_sec": 0 00:15:44.559 }, 00:15:44.559 "claimed": true, 00:15:44.559 "claim_type": "exclusive_write", 00:15:44.559 "zoned": false, 00:15:44.559 "supported_io_types": { 00:15:44.559 "read": true, 00:15:44.559 "write": true, 00:15:44.559 "unmap": true, 00:15:44.559 "flush": true, 00:15:44.559 "reset": true, 00:15:44.559 "nvme_admin": false, 00:15:44.559 "nvme_io": false, 00:15:44.559 "nvme_io_md": false, 00:15:44.559 "write_zeroes": true, 00:15:44.559 "zcopy": true, 00:15:44.559 "get_zone_info": false, 00:15:44.559 "zone_management": false, 00:15:44.559 "zone_append": false, 00:15:44.559 "compare": false, 00:15:44.559 "compare_and_write": false, 00:15:44.559 "abort": true, 00:15:44.559 "seek_hole": false, 00:15:44.559 "seek_data": false, 00:15:44.559 "copy": true, 00:15:44.559 
"nvme_iov_md": false 00:15:44.559 }, 00:15:44.559 "memory_domains": [ 00:15:44.559 { 00:15:44.559 "dma_device_id": "system", 00:15:44.559 "dma_device_type": 1 00:15:44.559 }, 00:15:44.559 { 00:15:44.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.559 "dma_device_type": 2 00:15:44.559 } 00:15:44.559 ], 00:15:44.559 "driver_specific": {} 00:15:44.559 } 00:15:44.559 ] 00:15:44.559 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.559 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:44.559 10:38:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:44.559 10:38:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:44.559 10:38:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:44.559 10:38:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.559 10:38:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.559 10:38:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.559 10:38:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.559 10:38:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:44.559 10:38:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.559 10:38:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.559 10:38:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.559 10:38:47 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.559 10:38:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.559 10:38:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.559 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.559 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.559 10:38:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.559 10:38:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.559 "name": "Existed_Raid", 00:15:44.559 "uuid": "68e9ff33-e8e7-4b82-a370-4d4789404e52", 00:15:44.559 "strip_size_kb": 64, 00:15:44.559 "state": "online", 00:15:44.559 "raid_level": "raid5f", 00:15:44.559 "superblock": true, 00:15:44.559 "num_base_bdevs": 3, 00:15:44.559 "num_base_bdevs_discovered": 3, 00:15:44.559 "num_base_bdevs_operational": 3, 00:15:44.559 "base_bdevs_list": [ 00:15:44.559 { 00:15:44.559 "name": "BaseBdev1", 00:15:44.559 "uuid": "f02a672e-9f00-41f6-9a28-27560a51b43b", 00:15:44.559 "is_configured": true, 00:15:44.559 "data_offset": 2048, 00:15:44.559 "data_size": 63488 00:15:44.559 }, 00:15:44.559 { 00:15:44.559 "name": "BaseBdev2", 00:15:44.559 "uuid": "13c2e8a5-1e9f-4079-93cc-b6d8937d0a74", 00:15:44.559 "is_configured": true, 00:15:44.559 "data_offset": 2048, 00:15:44.559 "data_size": 63488 00:15:44.559 }, 00:15:44.559 { 00:15:44.559 "name": "BaseBdev3", 00:15:44.559 "uuid": "16371298-f7bb-405a-8ae8-88a0cac79783", 00:15:44.559 "is_configured": true, 00:15:44.559 "data_offset": 2048, 00:15:44.559 "data_size": 63488 00:15:44.559 } 00:15:44.559 ] 00:15:44.559 }' 00:15:44.559 10:38:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.559 10:38:47 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.135 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:45.135 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:45.135 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:45.135 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:45.135 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:45.135 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:45.135 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:45.135 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:45.135 10:38:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.135 10:38:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.135 [2024-11-20 10:38:48.374165] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:45.135 10:38:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.135 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:45.135 "name": "Existed_Raid", 00:15:45.135 "aliases": [ 00:15:45.135 "68e9ff33-e8e7-4b82-a370-4d4789404e52" 00:15:45.135 ], 00:15:45.135 "product_name": "Raid Volume", 00:15:45.135 "block_size": 512, 00:15:45.135 "num_blocks": 126976, 00:15:45.135 "uuid": "68e9ff33-e8e7-4b82-a370-4d4789404e52", 00:15:45.135 "assigned_rate_limits": { 00:15:45.135 "rw_ios_per_sec": 0, 00:15:45.135 
"rw_mbytes_per_sec": 0, 00:15:45.135 "r_mbytes_per_sec": 0, 00:15:45.135 "w_mbytes_per_sec": 0 00:15:45.135 }, 00:15:45.135 "claimed": false, 00:15:45.135 "zoned": false, 00:15:45.135 "supported_io_types": { 00:15:45.135 "read": true, 00:15:45.135 "write": true, 00:15:45.135 "unmap": false, 00:15:45.135 "flush": false, 00:15:45.135 "reset": true, 00:15:45.135 "nvme_admin": false, 00:15:45.135 "nvme_io": false, 00:15:45.135 "nvme_io_md": false, 00:15:45.135 "write_zeroes": true, 00:15:45.135 "zcopy": false, 00:15:45.135 "get_zone_info": false, 00:15:45.135 "zone_management": false, 00:15:45.135 "zone_append": false, 00:15:45.135 "compare": false, 00:15:45.135 "compare_and_write": false, 00:15:45.135 "abort": false, 00:15:45.135 "seek_hole": false, 00:15:45.135 "seek_data": false, 00:15:45.135 "copy": false, 00:15:45.135 "nvme_iov_md": false 00:15:45.135 }, 00:15:45.135 "driver_specific": { 00:15:45.135 "raid": { 00:15:45.135 "uuid": "68e9ff33-e8e7-4b82-a370-4d4789404e52", 00:15:45.135 "strip_size_kb": 64, 00:15:45.135 "state": "online", 00:15:45.135 "raid_level": "raid5f", 00:15:45.135 "superblock": true, 00:15:45.135 "num_base_bdevs": 3, 00:15:45.135 "num_base_bdevs_discovered": 3, 00:15:45.135 "num_base_bdevs_operational": 3, 00:15:45.135 "base_bdevs_list": [ 00:15:45.135 { 00:15:45.135 "name": "BaseBdev1", 00:15:45.135 "uuid": "f02a672e-9f00-41f6-9a28-27560a51b43b", 00:15:45.135 "is_configured": true, 00:15:45.135 "data_offset": 2048, 00:15:45.135 "data_size": 63488 00:15:45.135 }, 00:15:45.135 { 00:15:45.135 "name": "BaseBdev2", 00:15:45.135 "uuid": "13c2e8a5-1e9f-4079-93cc-b6d8937d0a74", 00:15:45.135 "is_configured": true, 00:15:45.135 "data_offset": 2048, 00:15:45.135 "data_size": 63488 00:15:45.135 }, 00:15:45.135 { 00:15:45.135 "name": "BaseBdev3", 00:15:45.135 "uuid": "16371298-f7bb-405a-8ae8-88a0cac79783", 00:15:45.135 "is_configured": true, 00:15:45.135 "data_offset": 2048, 00:15:45.135 "data_size": 63488 00:15:45.135 } 00:15:45.135 ] 00:15:45.135 } 
00:15:45.135 } 00:15:45.135 }' 00:15:45.135 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:45.135 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:45.135 BaseBdev2 00:15:45.135 BaseBdev3' 00:15:45.135 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.135 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:45.135 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:45.135 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.135 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:45.135 10:38:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.135 10:38:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.135 10:38:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.135 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:45.135 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:45.135 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:45.135 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:45.135 10:38:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:45.135 10:38:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.135 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.135 10:38:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.135 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:45.135 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:45.135 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:45.135 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:45.135 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.135 10:38:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.135 10:38:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.396 10:38:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.396 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:45.396 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:45.396 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:45.396 10:38:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.396 10:38:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.396 [2024-11-20 10:38:48.657496] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:45.396 10:38:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.396 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:45.396 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:45.396 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:45.396 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:45.396 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:45.396 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:45.396 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.396 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.396 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.396 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.396 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:45.396 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.396 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.396 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.396 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.396 10:38:48 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.396 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.396 10:38:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.396 10:38:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.396 10:38:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.396 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.396 "name": "Existed_Raid", 00:15:45.396 "uuid": "68e9ff33-e8e7-4b82-a370-4d4789404e52", 00:15:45.396 "strip_size_kb": 64, 00:15:45.396 "state": "online", 00:15:45.396 "raid_level": "raid5f", 00:15:45.396 "superblock": true, 00:15:45.396 "num_base_bdevs": 3, 00:15:45.396 "num_base_bdevs_discovered": 2, 00:15:45.396 "num_base_bdevs_operational": 2, 00:15:45.396 "base_bdevs_list": [ 00:15:45.396 { 00:15:45.396 "name": null, 00:15:45.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.396 "is_configured": false, 00:15:45.396 "data_offset": 0, 00:15:45.396 "data_size": 63488 00:15:45.396 }, 00:15:45.396 { 00:15:45.396 "name": "BaseBdev2", 00:15:45.396 "uuid": "13c2e8a5-1e9f-4079-93cc-b6d8937d0a74", 00:15:45.396 "is_configured": true, 00:15:45.396 "data_offset": 2048, 00:15:45.396 "data_size": 63488 00:15:45.396 }, 00:15:45.396 { 00:15:45.396 "name": "BaseBdev3", 00:15:45.396 "uuid": "16371298-f7bb-405a-8ae8-88a0cac79783", 00:15:45.396 "is_configured": true, 00:15:45.396 "data_offset": 2048, 00:15:45.396 "data_size": 63488 00:15:45.396 } 00:15:45.396 ] 00:15:45.396 }' 00:15:45.396 10:38:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.396 10:38:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.971 10:38:49 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:45.971 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:45.971 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:45.971 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.971 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.971 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.971 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.971 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:45.971 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:45.971 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:45.971 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.971 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.971 [2024-11-20 10:38:49.266273] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:45.971 [2024-11-20 10:38:49.266436] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:45.971 [2024-11-20 10:38:49.360305] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:45.971 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.971 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:45.971 10:38:49 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:45.971 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.971 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:45.971 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.971 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.971 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.971 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:45.971 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:45.971 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:45.971 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.971 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.971 [2024-11-20 10:38:49.416243] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:45.971 [2024-11-20 10:38:49.416291] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:46.232 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.233 BaseBdev2 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # 
[[ -z '' ]] 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.233 [ 00:15:46.233 { 00:15:46.233 "name": "BaseBdev2", 00:15:46.233 "aliases": [ 00:15:46.233 "2440a029-469b-41cb-a527-0bc07cf67376" 00:15:46.233 ], 00:15:46.233 "product_name": "Malloc disk", 00:15:46.233 "block_size": 512, 00:15:46.233 "num_blocks": 65536, 00:15:46.233 "uuid": "2440a029-469b-41cb-a527-0bc07cf67376", 00:15:46.233 "assigned_rate_limits": { 00:15:46.233 "rw_ios_per_sec": 0, 00:15:46.233 "rw_mbytes_per_sec": 0, 00:15:46.233 "r_mbytes_per_sec": 0, 00:15:46.233 "w_mbytes_per_sec": 0 00:15:46.233 }, 00:15:46.233 "claimed": false, 00:15:46.233 "zoned": false, 00:15:46.233 "supported_io_types": { 00:15:46.233 "read": true, 00:15:46.233 "write": true, 00:15:46.233 "unmap": true, 00:15:46.233 "flush": true, 00:15:46.233 "reset": true, 00:15:46.233 "nvme_admin": false, 00:15:46.233 "nvme_io": false, 00:15:46.233 "nvme_io_md": false, 00:15:46.233 "write_zeroes": true, 00:15:46.233 "zcopy": true, 00:15:46.233 "get_zone_info": false, 00:15:46.233 "zone_management": false, 00:15:46.233 "zone_append": false, 
00:15:46.233 "compare": false, 00:15:46.233 "compare_and_write": false, 00:15:46.233 "abort": true, 00:15:46.233 "seek_hole": false, 00:15:46.233 "seek_data": false, 00:15:46.233 "copy": true, 00:15:46.233 "nvme_iov_md": false 00:15:46.233 }, 00:15:46.233 "memory_domains": [ 00:15:46.233 { 00:15:46.233 "dma_device_id": "system", 00:15:46.233 "dma_device_type": 1 00:15:46.233 }, 00:15:46.233 { 00:15:46.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.233 "dma_device_type": 2 00:15:46.233 } 00:15:46.233 ], 00:15:46.233 "driver_specific": {} 00:15:46.233 } 00:15:46.233 ] 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.233 BaseBdev3 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:46.233 
10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.233 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.233 [ 00:15:46.233 { 00:15:46.233 "name": "BaseBdev3", 00:15:46.233 "aliases": [ 00:15:46.233 "3bb7c262-d6fe-45d7-8536-eef28f70e80a" 00:15:46.233 ], 00:15:46.233 "product_name": "Malloc disk", 00:15:46.233 "block_size": 512, 00:15:46.233 "num_blocks": 65536, 00:15:46.233 "uuid": "3bb7c262-d6fe-45d7-8536-eef28f70e80a", 00:15:46.233 "assigned_rate_limits": { 00:15:46.233 "rw_ios_per_sec": 0, 00:15:46.233 "rw_mbytes_per_sec": 0, 00:15:46.233 "r_mbytes_per_sec": 0, 00:15:46.233 "w_mbytes_per_sec": 0 00:15:46.233 }, 00:15:46.233 "claimed": false, 00:15:46.233 "zoned": false, 00:15:46.233 "supported_io_types": { 00:15:46.233 "read": true, 00:15:46.233 "write": true, 00:15:46.233 "unmap": true, 00:15:46.233 "flush": true, 00:15:46.233 "reset": true, 00:15:46.233 "nvme_admin": false, 00:15:46.233 "nvme_io": false, 00:15:46.494 "nvme_io_md": false, 00:15:46.494 "write_zeroes": true, 00:15:46.494 "zcopy": true, 00:15:46.494 "get_zone_info": 
false, 00:15:46.494 "zone_management": false, 00:15:46.494 "zone_append": false, 00:15:46.494 "compare": false, 00:15:46.494 "compare_and_write": false, 00:15:46.494 "abort": true, 00:15:46.494 "seek_hole": false, 00:15:46.494 "seek_data": false, 00:15:46.494 "copy": true, 00:15:46.494 "nvme_iov_md": false 00:15:46.494 }, 00:15:46.494 "memory_domains": [ 00:15:46.494 { 00:15:46.494 "dma_device_id": "system", 00:15:46.494 "dma_device_type": 1 00:15:46.494 }, 00:15:46.494 { 00:15:46.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.494 "dma_device_type": 2 00:15:46.494 } 00:15:46.494 ], 00:15:46.494 "driver_specific": {} 00:15:46.494 } 00:15:46.494 ] 00:15:46.494 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.494 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:46.494 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:46.494 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:46.494 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:46.494 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.494 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.494 [2024-11-20 10:38:49.722941] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:46.494 [2024-11-20 10:38:49.723033] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:46.494 [2024-11-20 10:38:49.723075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:46.494 [2024-11-20 10:38:49.724914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:15:46.494 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.494 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:46.494 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.494 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:46.494 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.494 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.494 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:46.494 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.494 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.494 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.494 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.494 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.494 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.494 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.494 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.494 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.494 10:38:49 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.494 "name": "Existed_Raid", 00:15:46.494 "uuid": "b7a50715-0c61-4649-91d5-31dd6cfc34d5", 00:15:46.494 "strip_size_kb": 64, 00:15:46.494 "state": "configuring", 00:15:46.494 "raid_level": "raid5f", 00:15:46.494 "superblock": true, 00:15:46.494 "num_base_bdevs": 3, 00:15:46.494 "num_base_bdevs_discovered": 2, 00:15:46.494 "num_base_bdevs_operational": 3, 00:15:46.494 "base_bdevs_list": [ 00:15:46.494 { 00:15:46.494 "name": "BaseBdev1", 00:15:46.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.494 "is_configured": false, 00:15:46.494 "data_offset": 0, 00:15:46.494 "data_size": 0 00:15:46.494 }, 00:15:46.494 { 00:15:46.494 "name": "BaseBdev2", 00:15:46.494 "uuid": "2440a029-469b-41cb-a527-0bc07cf67376", 00:15:46.494 "is_configured": true, 00:15:46.494 "data_offset": 2048, 00:15:46.494 "data_size": 63488 00:15:46.494 }, 00:15:46.494 { 00:15:46.494 "name": "BaseBdev3", 00:15:46.494 "uuid": "3bb7c262-d6fe-45d7-8536-eef28f70e80a", 00:15:46.494 "is_configured": true, 00:15:46.494 "data_offset": 2048, 00:15:46.494 "data_size": 63488 00:15:46.494 } 00:15:46.494 ] 00:15:46.494 }' 00:15:46.494 10:38:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.494 10:38:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.754 10:38:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:46.754 10:38:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.754 10:38:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.754 [2024-11-20 10:38:50.122273] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:46.754 10:38:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.754 
10:38:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:46.754 10:38:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.754 10:38:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:46.754 10:38:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.754 10:38:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.754 10:38:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:46.754 10:38:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.754 10:38:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.754 10:38:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.754 10:38:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.754 10:38:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.754 10:38:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.754 10:38:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.754 10:38:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.754 10:38:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.754 10:38:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.754 "name": "Existed_Raid", 00:15:46.754 "uuid": 
"b7a50715-0c61-4649-91d5-31dd6cfc34d5", 00:15:46.754 "strip_size_kb": 64, 00:15:46.754 "state": "configuring", 00:15:46.754 "raid_level": "raid5f", 00:15:46.754 "superblock": true, 00:15:46.754 "num_base_bdevs": 3, 00:15:46.754 "num_base_bdevs_discovered": 1, 00:15:46.754 "num_base_bdevs_operational": 3, 00:15:46.754 "base_bdevs_list": [ 00:15:46.754 { 00:15:46.754 "name": "BaseBdev1", 00:15:46.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.754 "is_configured": false, 00:15:46.754 "data_offset": 0, 00:15:46.754 "data_size": 0 00:15:46.754 }, 00:15:46.754 { 00:15:46.754 "name": null, 00:15:46.754 "uuid": "2440a029-469b-41cb-a527-0bc07cf67376", 00:15:46.754 "is_configured": false, 00:15:46.754 "data_offset": 0, 00:15:46.754 "data_size": 63488 00:15:46.754 }, 00:15:46.754 { 00:15:46.754 "name": "BaseBdev3", 00:15:46.754 "uuid": "3bb7c262-d6fe-45d7-8536-eef28f70e80a", 00:15:46.754 "is_configured": true, 00:15:46.754 "data_offset": 2048, 00:15:46.754 "data_size": 63488 00:15:46.754 } 00:15:46.754 ] 00:15:46.754 }' 00:15:46.754 10:38:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.754 10:38:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.323 10:38:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:47.323 10:38:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.323 10:38:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.323 10:38:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.323 10:38:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.323 10:38:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:47.323 10:38:50 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:47.323 10:38:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.323 10:38:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.323 [2024-11-20 10:38:50.624996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:47.323 BaseBdev1 00:15:47.323 10:38:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.323 10:38:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:47.323 10:38:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:47.323 10:38:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:47.323 10:38:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:47.323 10:38:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:47.323 10:38:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:47.323 10:38:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:47.323 10:38:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.323 10:38:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.323 10:38:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.323 10:38:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:47.323 10:38:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:47.323 10:38:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.323 [ 00:15:47.323 { 00:15:47.323 "name": "BaseBdev1", 00:15:47.323 "aliases": [ 00:15:47.323 "85cceb7e-6825-4d7b-986f-fc03aa4d8cf3" 00:15:47.323 ], 00:15:47.323 "product_name": "Malloc disk", 00:15:47.323 "block_size": 512, 00:15:47.323 "num_blocks": 65536, 00:15:47.323 "uuid": "85cceb7e-6825-4d7b-986f-fc03aa4d8cf3", 00:15:47.323 "assigned_rate_limits": { 00:15:47.323 "rw_ios_per_sec": 0, 00:15:47.323 "rw_mbytes_per_sec": 0, 00:15:47.323 "r_mbytes_per_sec": 0, 00:15:47.323 "w_mbytes_per_sec": 0 00:15:47.323 }, 00:15:47.323 "claimed": true, 00:15:47.323 "claim_type": "exclusive_write", 00:15:47.323 "zoned": false, 00:15:47.323 "supported_io_types": { 00:15:47.323 "read": true, 00:15:47.323 "write": true, 00:15:47.323 "unmap": true, 00:15:47.323 "flush": true, 00:15:47.323 "reset": true, 00:15:47.323 "nvme_admin": false, 00:15:47.323 "nvme_io": false, 00:15:47.323 "nvme_io_md": false, 00:15:47.323 "write_zeroes": true, 00:15:47.323 "zcopy": true, 00:15:47.323 "get_zone_info": false, 00:15:47.323 "zone_management": false, 00:15:47.323 "zone_append": false, 00:15:47.323 "compare": false, 00:15:47.323 "compare_and_write": false, 00:15:47.323 "abort": true, 00:15:47.323 "seek_hole": false, 00:15:47.323 "seek_data": false, 00:15:47.323 "copy": true, 00:15:47.323 "nvme_iov_md": false 00:15:47.323 }, 00:15:47.323 "memory_domains": [ 00:15:47.323 { 00:15:47.323 "dma_device_id": "system", 00:15:47.323 "dma_device_type": 1 00:15:47.323 }, 00:15:47.323 { 00:15:47.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.323 "dma_device_type": 2 00:15:47.323 } 00:15:47.323 ], 00:15:47.323 "driver_specific": {} 00:15:47.323 } 00:15:47.323 ] 00:15:47.323 10:38:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.323 10:38:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # 
return 0 00:15:47.323 10:38:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:47.323 10:38:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:47.323 10:38:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:47.323 10:38:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.323 10:38:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.323 10:38:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:47.323 10:38:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.323 10:38:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.323 10:38:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.323 10:38:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.324 10:38:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.324 10:38:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.324 10:38:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.324 10:38:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.324 10:38:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.324 10:38:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.324 "name": "Existed_Raid", 00:15:47.324 "uuid": 
"b7a50715-0c61-4649-91d5-31dd6cfc34d5", 00:15:47.324 "strip_size_kb": 64, 00:15:47.324 "state": "configuring", 00:15:47.324 "raid_level": "raid5f", 00:15:47.324 "superblock": true, 00:15:47.324 "num_base_bdevs": 3, 00:15:47.324 "num_base_bdevs_discovered": 2, 00:15:47.324 "num_base_bdevs_operational": 3, 00:15:47.324 "base_bdevs_list": [ 00:15:47.324 { 00:15:47.324 "name": "BaseBdev1", 00:15:47.324 "uuid": "85cceb7e-6825-4d7b-986f-fc03aa4d8cf3", 00:15:47.324 "is_configured": true, 00:15:47.324 "data_offset": 2048, 00:15:47.324 "data_size": 63488 00:15:47.324 }, 00:15:47.324 { 00:15:47.324 "name": null, 00:15:47.324 "uuid": "2440a029-469b-41cb-a527-0bc07cf67376", 00:15:47.324 "is_configured": false, 00:15:47.324 "data_offset": 0, 00:15:47.324 "data_size": 63488 00:15:47.324 }, 00:15:47.324 { 00:15:47.324 "name": "BaseBdev3", 00:15:47.324 "uuid": "3bb7c262-d6fe-45d7-8536-eef28f70e80a", 00:15:47.324 "is_configured": true, 00:15:47.324 "data_offset": 2048, 00:15:47.324 "data_size": 63488 00:15:47.324 } 00:15:47.324 ] 00:15:47.324 }' 00:15:47.324 10:38:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.324 10:38:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.893 10:38:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.893 10:38:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.893 10:38:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.893 10:38:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:47.893 10:38:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.893 10:38:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:47.893 10:38:51 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:47.893 10:38:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.893 10:38:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.893 [2024-11-20 10:38:51.148157] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:47.893 10:38:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.893 10:38:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:47.893 10:38:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:47.893 10:38:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:47.893 10:38:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.893 10:38:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.893 10:38:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:47.893 10:38:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.893 10:38:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.893 10:38:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.893 10:38:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.893 10:38:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.893 10:38:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:47.893 10:38:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.893 10:38:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.893 10:38:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.893 10:38:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.893 "name": "Existed_Raid", 00:15:47.893 "uuid": "b7a50715-0c61-4649-91d5-31dd6cfc34d5", 00:15:47.893 "strip_size_kb": 64, 00:15:47.893 "state": "configuring", 00:15:47.893 "raid_level": "raid5f", 00:15:47.893 "superblock": true, 00:15:47.893 "num_base_bdevs": 3, 00:15:47.893 "num_base_bdevs_discovered": 1, 00:15:47.893 "num_base_bdevs_operational": 3, 00:15:47.893 "base_bdevs_list": [ 00:15:47.893 { 00:15:47.893 "name": "BaseBdev1", 00:15:47.893 "uuid": "85cceb7e-6825-4d7b-986f-fc03aa4d8cf3", 00:15:47.893 "is_configured": true, 00:15:47.893 "data_offset": 2048, 00:15:47.893 "data_size": 63488 00:15:47.893 }, 00:15:47.893 { 00:15:47.893 "name": null, 00:15:47.893 "uuid": "2440a029-469b-41cb-a527-0bc07cf67376", 00:15:47.893 "is_configured": false, 00:15:47.893 "data_offset": 0, 00:15:47.893 "data_size": 63488 00:15:47.893 }, 00:15:47.893 { 00:15:47.893 "name": null, 00:15:47.893 "uuid": "3bb7c262-d6fe-45d7-8536-eef28f70e80a", 00:15:47.893 "is_configured": false, 00:15:47.893 "data_offset": 0, 00:15:47.893 "data_size": 63488 00:15:47.893 } 00:15:47.893 ] 00:15:47.893 }' 00:15:47.893 10:38:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.893 10:38:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.153 10:38:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.153 10:38:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 
-- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:48.153 10:38:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.153 10:38:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.153 10:38:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.413 10:38:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:48.414 10:38:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:48.414 10:38:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.414 10:38:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.414 [2024-11-20 10:38:51.643464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:48.414 10:38:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.414 10:38:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:48.414 10:38:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:48.414 10:38:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:48.414 10:38:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.414 10:38:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.414 10:38:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:48.414 10:38:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.414 10:38:51 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.414 10:38:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.414 10:38:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.414 10:38:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.414 10:38:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.414 10:38:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.414 10:38:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.414 10:38:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.414 10:38:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.414 "name": "Existed_Raid", 00:15:48.414 "uuid": "b7a50715-0c61-4649-91d5-31dd6cfc34d5", 00:15:48.414 "strip_size_kb": 64, 00:15:48.414 "state": "configuring", 00:15:48.414 "raid_level": "raid5f", 00:15:48.414 "superblock": true, 00:15:48.414 "num_base_bdevs": 3, 00:15:48.414 "num_base_bdevs_discovered": 2, 00:15:48.414 "num_base_bdevs_operational": 3, 00:15:48.414 "base_bdevs_list": [ 00:15:48.414 { 00:15:48.414 "name": "BaseBdev1", 00:15:48.414 "uuid": "85cceb7e-6825-4d7b-986f-fc03aa4d8cf3", 00:15:48.414 "is_configured": true, 00:15:48.414 "data_offset": 2048, 00:15:48.414 "data_size": 63488 00:15:48.414 }, 00:15:48.414 { 00:15:48.414 "name": null, 00:15:48.414 "uuid": "2440a029-469b-41cb-a527-0bc07cf67376", 00:15:48.414 "is_configured": false, 00:15:48.414 "data_offset": 0, 00:15:48.414 "data_size": 63488 00:15:48.414 }, 00:15:48.414 { 00:15:48.414 "name": "BaseBdev3", 00:15:48.414 "uuid": "3bb7c262-d6fe-45d7-8536-eef28f70e80a", 00:15:48.414 
"is_configured": true, 00:15:48.414 "data_offset": 2048, 00:15:48.414 "data_size": 63488 00:15:48.414 } 00:15:48.414 ] 00:15:48.414 }' 00:15:48.414 10:38:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.414 10:38:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.674 10:38:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:48.674 10:38:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.674 10:38:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.674 10:38:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.674 10:38:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.674 10:38:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:48.674 10:38:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:48.674 10:38:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.674 10:38:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.674 [2024-11-20 10:38:52.094678] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:48.934 10:38:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.934 10:38:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:48.934 10:38:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:48.934 10:38:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:15:48.934 10:38:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.934 10:38:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.934 10:38:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:48.934 10:38:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.934 10:38:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.934 10:38:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.934 10:38:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.934 10:38:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.934 10:38:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.934 10:38:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.934 10:38:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.934 10:38:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.934 10:38:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.934 "name": "Existed_Raid", 00:15:48.935 "uuid": "b7a50715-0c61-4649-91d5-31dd6cfc34d5", 00:15:48.935 "strip_size_kb": 64, 00:15:48.935 "state": "configuring", 00:15:48.935 "raid_level": "raid5f", 00:15:48.935 "superblock": true, 00:15:48.935 "num_base_bdevs": 3, 00:15:48.935 "num_base_bdevs_discovered": 1, 00:15:48.935 "num_base_bdevs_operational": 3, 00:15:48.935 "base_bdevs_list": [ 00:15:48.935 { 00:15:48.935 "name": null, 00:15:48.935 
"uuid": "85cceb7e-6825-4d7b-986f-fc03aa4d8cf3", 00:15:48.935 "is_configured": false, 00:15:48.935 "data_offset": 0, 00:15:48.935 "data_size": 63488 00:15:48.935 }, 00:15:48.935 { 00:15:48.935 "name": null, 00:15:48.935 "uuid": "2440a029-469b-41cb-a527-0bc07cf67376", 00:15:48.935 "is_configured": false, 00:15:48.935 "data_offset": 0, 00:15:48.935 "data_size": 63488 00:15:48.935 }, 00:15:48.935 { 00:15:48.935 "name": "BaseBdev3", 00:15:48.935 "uuid": "3bb7c262-d6fe-45d7-8536-eef28f70e80a", 00:15:48.935 "is_configured": true, 00:15:48.935 "data_offset": 2048, 00:15:48.935 "data_size": 63488 00:15:48.935 } 00:15:48.935 ] 00:15:48.935 }' 00:15:48.935 10:38:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.935 10:38:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.194 10:38:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:49.194 10:38:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.194 10:38:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.194 10:38:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.194 10:38:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.194 10:38:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:49.194 10:38:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:49.194 10:38:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.194 10:38:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.194 [2024-11-20 10:38:52.667538] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:49.453 10:38:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.453 10:38:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:49.453 10:38:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.453 10:38:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.453 10:38:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.453 10:38:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.453 10:38:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:49.453 10:38:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.453 10:38:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.453 10:38:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.453 10:38:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.453 10:38:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.453 10:38:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.453 10:38:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.453 10:38:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.453 10:38:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:15:49.453 10:38:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.453 "name": "Existed_Raid", 00:15:49.453 "uuid": "b7a50715-0c61-4649-91d5-31dd6cfc34d5", 00:15:49.453 "strip_size_kb": 64, 00:15:49.453 "state": "configuring", 00:15:49.453 "raid_level": "raid5f", 00:15:49.453 "superblock": true, 00:15:49.453 "num_base_bdevs": 3, 00:15:49.453 "num_base_bdevs_discovered": 2, 00:15:49.453 "num_base_bdevs_operational": 3, 00:15:49.453 "base_bdevs_list": [ 00:15:49.453 { 00:15:49.453 "name": null, 00:15:49.453 "uuid": "85cceb7e-6825-4d7b-986f-fc03aa4d8cf3", 00:15:49.453 "is_configured": false, 00:15:49.453 "data_offset": 0, 00:15:49.453 "data_size": 63488 00:15:49.453 }, 00:15:49.453 { 00:15:49.453 "name": "BaseBdev2", 00:15:49.453 "uuid": "2440a029-469b-41cb-a527-0bc07cf67376", 00:15:49.453 "is_configured": true, 00:15:49.453 "data_offset": 2048, 00:15:49.453 "data_size": 63488 00:15:49.453 }, 00:15:49.453 { 00:15:49.453 "name": "BaseBdev3", 00:15:49.453 "uuid": "3bb7c262-d6fe-45d7-8536-eef28f70e80a", 00:15:49.453 "is_configured": true, 00:15:49.453 "data_offset": 2048, 00:15:49.453 "data_size": 63488 00:15:49.453 } 00:15:49.453 ] 00:15:49.453 }' 00:15:49.453 10:38:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.453 10:38:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.711 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:49.711 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.711 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.711 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.711 10:38:53 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.711 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:49.711 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.711 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.711 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:49.711 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.711 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.973 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 85cceb7e-6825-4d7b-986f-fc03aa4d8cf3 00:15:49.973 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.973 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.973 [2024-11-20 10:38:53.234808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:49.973 [2024-11-20 10:38:53.235146] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:49.973 [2024-11-20 10:38:53.235198] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:49.973 [2024-11-20 10:38:53.235482] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:49.973 NewBaseBdev 00:15:49.973 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.973 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:49.973 10:38:53 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:49.973 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:49.973 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:49.973 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:49.973 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:49.973 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:49.973 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.973 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.973 [2024-11-20 10:38:53.240899] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:49.973 [2024-11-20 10:38:53.240921] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:49.973 [2024-11-20 10:38:53.241070] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.973 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.973 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:49.973 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.973 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.973 [ 00:15:49.973 { 00:15:49.973 "name": "NewBaseBdev", 00:15:49.973 "aliases": [ 00:15:49.973 "85cceb7e-6825-4d7b-986f-fc03aa4d8cf3" 00:15:49.973 ], 00:15:49.973 "product_name": "Malloc disk", 00:15:49.973 "block_size": 512, 
00:15:49.973 "num_blocks": 65536, 00:15:49.973 "uuid": "85cceb7e-6825-4d7b-986f-fc03aa4d8cf3", 00:15:49.973 "assigned_rate_limits": { 00:15:49.973 "rw_ios_per_sec": 0, 00:15:49.973 "rw_mbytes_per_sec": 0, 00:15:49.973 "r_mbytes_per_sec": 0, 00:15:49.973 "w_mbytes_per_sec": 0 00:15:49.973 }, 00:15:49.973 "claimed": true, 00:15:49.973 "claim_type": "exclusive_write", 00:15:49.973 "zoned": false, 00:15:49.973 "supported_io_types": { 00:15:49.973 "read": true, 00:15:49.973 "write": true, 00:15:49.973 "unmap": true, 00:15:49.973 "flush": true, 00:15:49.973 "reset": true, 00:15:49.973 "nvme_admin": false, 00:15:49.973 "nvme_io": false, 00:15:49.973 "nvme_io_md": false, 00:15:49.973 "write_zeroes": true, 00:15:49.973 "zcopy": true, 00:15:49.973 "get_zone_info": false, 00:15:49.973 "zone_management": false, 00:15:49.973 "zone_append": false, 00:15:49.973 "compare": false, 00:15:49.973 "compare_and_write": false, 00:15:49.973 "abort": true, 00:15:49.973 "seek_hole": false, 00:15:49.973 "seek_data": false, 00:15:49.973 "copy": true, 00:15:49.973 "nvme_iov_md": false 00:15:49.973 }, 00:15:49.973 "memory_domains": [ 00:15:49.973 { 00:15:49.973 "dma_device_id": "system", 00:15:49.973 "dma_device_type": 1 00:15:49.973 }, 00:15:49.973 { 00:15:49.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.973 "dma_device_type": 2 00:15:49.973 } 00:15:49.973 ], 00:15:49.973 "driver_specific": {} 00:15:49.973 } 00:15:49.973 ] 00:15:49.973 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.973 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:49.973 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:49.973 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.973 10:38:53 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.973 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.973 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.973 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:49.973 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.973 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.973 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.973 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.973 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.973 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.973 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.973 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.973 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.973 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.973 "name": "Existed_Raid", 00:15:49.973 "uuid": "b7a50715-0c61-4649-91d5-31dd6cfc34d5", 00:15:49.973 "strip_size_kb": 64, 00:15:49.973 "state": "online", 00:15:49.973 "raid_level": "raid5f", 00:15:49.973 "superblock": true, 00:15:49.973 "num_base_bdevs": 3, 00:15:49.973 "num_base_bdevs_discovered": 3, 00:15:49.973 "num_base_bdevs_operational": 3, 00:15:49.973 "base_bdevs_list": [ 00:15:49.973 { 00:15:49.973 "name": 
"NewBaseBdev", 00:15:49.973 "uuid": "85cceb7e-6825-4d7b-986f-fc03aa4d8cf3", 00:15:49.973 "is_configured": true, 00:15:49.973 "data_offset": 2048, 00:15:49.973 "data_size": 63488 00:15:49.973 }, 00:15:49.973 { 00:15:49.973 "name": "BaseBdev2", 00:15:49.973 "uuid": "2440a029-469b-41cb-a527-0bc07cf67376", 00:15:49.973 "is_configured": true, 00:15:49.973 "data_offset": 2048, 00:15:49.973 "data_size": 63488 00:15:49.973 }, 00:15:49.973 { 00:15:49.973 "name": "BaseBdev3", 00:15:49.973 "uuid": "3bb7c262-d6fe-45d7-8536-eef28f70e80a", 00:15:49.973 "is_configured": true, 00:15:49.973 "data_offset": 2048, 00:15:49.973 "data_size": 63488 00:15:49.973 } 00:15:49.973 ] 00:15:49.973 }' 00:15:49.973 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.973 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.539 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:50.539 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:50.539 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:50.539 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:50.539 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:50.539 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:50.539 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:50.539 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:50.539 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.539 10:38:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.539 [2024-11-20 10:38:53.722795] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:50.539 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.539 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:50.539 "name": "Existed_Raid", 00:15:50.539 "aliases": [ 00:15:50.539 "b7a50715-0c61-4649-91d5-31dd6cfc34d5" 00:15:50.539 ], 00:15:50.539 "product_name": "Raid Volume", 00:15:50.539 "block_size": 512, 00:15:50.539 "num_blocks": 126976, 00:15:50.539 "uuid": "b7a50715-0c61-4649-91d5-31dd6cfc34d5", 00:15:50.539 "assigned_rate_limits": { 00:15:50.539 "rw_ios_per_sec": 0, 00:15:50.539 "rw_mbytes_per_sec": 0, 00:15:50.539 "r_mbytes_per_sec": 0, 00:15:50.539 "w_mbytes_per_sec": 0 00:15:50.539 }, 00:15:50.539 "claimed": false, 00:15:50.539 "zoned": false, 00:15:50.539 "supported_io_types": { 00:15:50.539 "read": true, 00:15:50.539 "write": true, 00:15:50.539 "unmap": false, 00:15:50.539 "flush": false, 00:15:50.539 "reset": true, 00:15:50.539 "nvme_admin": false, 00:15:50.539 "nvme_io": false, 00:15:50.539 "nvme_io_md": false, 00:15:50.539 "write_zeroes": true, 00:15:50.539 "zcopy": false, 00:15:50.539 "get_zone_info": false, 00:15:50.539 "zone_management": false, 00:15:50.539 "zone_append": false, 00:15:50.539 "compare": false, 00:15:50.539 "compare_and_write": false, 00:15:50.539 "abort": false, 00:15:50.539 "seek_hole": false, 00:15:50.539 "seek_data": false, 00:15:50.539 "copy": false, 00:15:50.539 "nvme_iov_md": false 00:15:50.539 }, 00:15:50.539 "driver_specific": { 00:15:50.540 "raid": { 00:15:50.540 "uuid": "b7a50715-0c61-4649-91d5-31dd6cfc34d5", 00:15:50.540 "strip_size_kb": 64, 00:15:50.540 "state": "online", 00:15:50.540 "raid_level": "raid5f", 00:15:50.540 "superblock": true, 00:15:50.540 "num_base_bdevs": 3, 00:15:50.540 
"num_base_bdevs_discovered": 3, 00:15:50.540 "num_base_bdevs_operational": 3, 00:15:50.540 "base_bdevs_list": [ 00:15:50.540 { 00:15:50.540 "name": "NewBaseBdev", 00:15:50.540 "uuid": "85cceb7e-6825-4d7b-986f-fc03aa4d8cf3", 00:15:50.540 "is_configured": true, 00:15:50.540 "data_offset": 2048, 00:15:50.540 "data_size": 63488 00:15:50.540 }, 00:15:50.540 { 00:15:50.540 "name": "BaseBdev2", 00:15:50.540 "uuid": "2440a029-469b-41cb-a527-0bc07cf67376", 00:15:50.540 "is_configured": true, 00:15:50.540 "data_offset": 2048, 00:15:50.540 "data_size": 63488 00:15:50.540 }, 00:15:50.540 { 00:15:50.540 "name": "BaseBdev3", 00:15:50.540 "uuid": "3bb7c262-d6fe-45d7-8536-eef28f70e80a", 00:15:50.540 "is_configured": true, 00:15:50.540 "data_offset": 2048, 00:15:50.540 "data_size": 63488 00:15:50.540 } 00:15:50.540 ] 00:15:50.540 } 00:15:50.540 } 00:15:50.540 }' 00:15:50.540 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:50.540 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:50.540 BaseBdev2 00:15:50.540 BaseBdev3' 00:15:50.540 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.540 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:50.540 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:50.540 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:50.540 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.540 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.540 10:38:53 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.540 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.540 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:50.540 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:50.540 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:50.540 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.540 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:50.540 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.540 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.540 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.540 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:50.540 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:50.540 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:50.540 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:50.540 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.540 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:50.540 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.540 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.540 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:50.540 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:50.540 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:50.540 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.540 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.540 [2024-11-20 10:38:53.982150] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:50.540 [2024-11-20 10:38:53.982176] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:50.540 [2024-11-20 10:38:53.982256] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:50.540 [2024-11-20 10:38:53.982542] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:50.540 [2024-11-20 10:38:53.982556] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:50.540 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.540 10:38:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80677 00:15:50.540 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80677 ']' 00:15:50.540 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80677 00:15:50.540 10:38:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:50.540 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:50.540 10:38:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80677 00:15:50.800 10:38:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:50.800 10:38:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:50.800 10:38:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80677' 00:15:50.800 killing process with pid 80677 00:15:50.800 10:38:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80677 00:15:50.800 [2024-11-20 10:38:54.029157] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:50.800 10:38:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80677 00:15:51.060 [2024-11-20 10:38:54.326978] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:51.995 10:38:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:51.995 00:15:51.995 real 0m10.495s 00:15:51.995 user 0m16.654s 00:15:51.995 sys 0m1.932s 00:15:51.995 10:38:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:51.995 ************************************ 00:15:51.995 END TEST raid5f_state_function_test_sb 00:15:51.995 ************************************ 00:15:51.995 10:38:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.254 10:38:55 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:15:52.254 10:38:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:52.254 
10:38:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:52.254 10:38:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:52.254 ************************************ 00:15:52.254 START TEST raid5f_superblock_test 00:15:52.254 ************************************ 00:15:52.254 10:38:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:15:52.254 10:38:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:52.254 10:38:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:52.254 10:38:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:52.254 10:38:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:52.254 10:38:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:52.254 10:38:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:52.254 10:38:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:52.254 10:38:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:52.254 10:38:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:52.254 10:38:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:52.254 10:38:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:52.254 10:38:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:52.254 10:38:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:52.254 10:38:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:52.254 10:38:55 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:52.254 10:38:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:52.254 10:38:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81298 00:15:52.254 10:38:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:52.254 10:38:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81298 00:15:52.254 10:38:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81298 ']' 00:15:52.254 10:38:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.254 10:38:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:52.254 10:38:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.255 10:38:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:52.255 10:38:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.255 [2024-11-20 10:38:55.579594] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:15:52.255 [2024-11-20 10:38:55.579796] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81298 ] 00:15:52.514 [2024-11-20 10:38:55.755982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.514 [2024-11-20 10:38:55.871348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.773 [2024-11-20 10:38:56.075879] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:52.773 [2024-11-20 10:38:56.075982] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:53.032 10:38:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:53.032 10:38:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:53.032 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:53.032 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:53.032 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:53.032 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:53.032 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:53.032 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:53.032 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:53.032 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:53.032 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:53.032 10:38:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.032 10:38:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.032 malloc1 00:15:53.032 10:38:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.032 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:53.032 10:38:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.032 10:38:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.032 [2024-11-20 10:38:56.462215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:53.032 [2024-11-20 10:38:56.462341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:53.032 [2024-11-20 10:38:56.462398] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:53.032 [2024-11-20 10:38:56.462431] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:53.032 [2024-11-20 10:38:56.464607] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:53.032 [2024-11-20 10:38:56.464680] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:53.032 pt1 00:15:53.032 10:38:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.032 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:53.032 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:53.032 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:53.032 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:53.032 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:53.032 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:53.032 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:53.032 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:53.032 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:53.032 10:38:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.032 10:38:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.291 malloc2 00:15:53.291 10:38:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.291 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:53.291 10:38:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.291 10:38:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.291 [2024-11-20 10:38:56.519719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:53.291 [2024-11-20 10:38:56.519784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:53.292 [2024-11-20 10:38:56.519826] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:53.292 [2024-11-20 10:38:56.519834] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:53.292 [2024-11-20 10:38:56.521966] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:53.292 [2024-11-20 10:38:56.522003] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:53.292 pt2 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.292 malloc3 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.292 [2024-11-20 10:38:56.584800] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:53.292 [2024-11-20 10:38:56.584909] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:53.292 [2024-11-20 10:38:56.584949] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:53.292 [2024-11-20 10:38:56.584977] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:53.292 [2024-11-20 10:38:56.587104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:53.292 [2024-11-20 10:38:56.587178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:53.292 pt3 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.292 [2024-11-20 10:38:56.596841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:53.292 [2024-11-20 10:38:56.598605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:53.292 [2024-11-20 10:38:56.598711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:53.292 [2024-11-20 10:38:56.598901] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:53.292 [2024-11-20 10:38:56.598949] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:15:53.292 [2024-11-20 10:38:56.599215] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:53.292 [2024-11-20 10:38:56.604620] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:53.292 [2024-11-20 10:38:56.604705] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:53.292 [2024-11-20 10:38:56.605007] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.292 "name": "raid_bdev1", 00:15:53.292 "uuid": "ba3b6eef-addf-4302-9ec6-37b104071739", 00:15:53.292 "strip_size_kb": 64, 00:15:53.292 "state": "online", 00:15:53.292 "raid_level": "raid5f", 00:15:53.292 "superblock": true, 00:15:53.292 "num_base_bdevs": 3, 00:15:53.292 "num_base_bdevs_discovered": 3, 00:15:53.292 "num_base_bdevs_operational": 3, 00:15:53.292 "base_bdevs_list": [ 00:15:53.292 { 00:15:53.292 "name": "pt1", 00:15:53.292 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:53.292 "is_configured": true, 00:15:53.292 "data_offset": 2048, 00:15:53.292 "data_size": 63488 00:15:53.292 }, 00:15:53.292 { 00:15:53.292 "name": "pt2", 00:15:53.292 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:53.292 "is_configured": true, 00:15:53.292 "data_offset": 2048, 00:15:53.292 "data_size": 63488 00:15:53.292 }, 00:15:53.292 { 00:15:53.292 "name": "pt3", 00:15:53.292 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:53.292 "is_configured": true, 00:15:53.292 "data_offset": 2048, 00:15:53.292 "data_size": 63488 00:15:53.292 } 00:15:53.292 ] 00:15:53.292 }' 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.292 10:38:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.861 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:53.861 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:53.861 10:38:57 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:53.861 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:53.862 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:53.862 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:53.862 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:53.862 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:53.862 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.862 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.862 [2024-11-20 10:38:57.059239] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:53.862 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.862 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:53.862 "name": "raid_bdev1", 00:15:53.862 "aliases": [ 00:15:53.862 "ba3b6eef-addf-4302-9ec6-37b104071739" 00:15:53.862 ], 00:15:53.862 "product_name": "Raid Volume", 00:15:53.862 "block_size": 512, 00:15:53.862 "num_blocks": 126976, 00:15:53.862 "uuid": "ba3b6eef-addf-4302-9ec6-37b104071739", 00:15:53.862 "assigned_rate_limits": { 00:15:53.862 "rw_ios_per_sec": 0, 00:15:53.862 "rw_mbytes_per_sec": 0, 00:15:53.862 "r_mbytes_per_sec": 0, 00:15:53.862 "w_mbytes_per_sec": 0 00:15:53.862 }, 00:15:53.862 "claimed": false, 00:15:53.862 "zoned": false, 00:15:53.862 "supported_io_types": { 00:15:53.862 "read": true, 00:15:53.862 "write": true, 00:15:53.862 "unmap": false, 00:15:53.862 "flush": false, 00:15:53.862 "reset": true, 00:15:53.862 "nvme_admin": false, 00:15:53.862 "nvme_io": false, 00:15:53.862 "nvme_io_md": false, 
00:15:53.862 "write_zeroes": true, 00:15:53.862 "zcopy": false, 00:15:53.862 "get_zone_info": false, 00:15:53.862 "zone_management": false, 00:15:53.862 "zone_append": false, 00:15:53.862 "compare": false, 00:15:53.862 "compare_and_write": false, 00:15:53.862 "abort": false, 00:15:53.862 "seek_hole": false, 00:15:53.862 "seek_data": false, 00:15:53.862 "copy": false, 00:15:53.862 "nvme_iov_md": false 00:15:53.862 }, 00:15:53.862 "driver_specific": { 00:15:53.862 "raid": { 00:15:53.862 "uuid": "ba3b6eef-addf-4302-9ec6-37b104071739", 00:15:53.862 "strip_size_kb": 64, 00:15:53.862 "state": "online", 00:15:53.862 "raid_level": "raid5f", 00:15:53.862 "superblock": true, 00:15:53.862 "num_base_bdevs": 3, 00:15:53.862 "num_base_bdevs_discovered": 3, 00:15:53.862 "num_base_bdevs_operational": 3, 00:15:53.862 "base_bdevs_list": [ 00:15:53.862 { 00:15:53.862 "name": "pt1", 00:15:53.862 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:53.862 "is_configured": true, 00:15:53.862 "data_offset": 2048, 00:15:53.862 "data_size": 63488 00:15:53.862 }, 00:15:53.862 { 00:15:53.862 "name": "pt2", 00:15:53.862 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:53.862 "is_configured": true, 00:15:53.862 "data_offset": 2048, 00:15:53.862 "data_size": 63488 00:15:53.862 }, 00:15:53.862 { 00:15:53.862 "name": "pt3", 00:15:53.862 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:53.862 "is_configured": true, 00:15:53.862 "data_offset": 2048, 00:15:53.862 "data_size": 63488 00:15:53.862 } 00:15:53.862 ] 00:15:53.862 } 00:15:53.862 } 00:15:53.862 }' 00:15:53.862 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:53.862 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:53.862 pt2 00:15:53.862 pt3' 00:15:53.862 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:15:53.862 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:53.862 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:53.862 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:53.862 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.862 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.862 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.862 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.862 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:53.862 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:53.862 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:53.862 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:53.862 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.862 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.862 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.862 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.862 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:53.862 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:53.862 
10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:53.862 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:53.862 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.862 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.862 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.862 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.122 [2024-11-20 10:38:57.358694] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ba3b6eef-addf-4302-9ec6-37b104071739 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ba3b6eef-addf-4302-9ec6-37b104071739 ']' 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:54.122 10:38:57 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.122 [2024-11-20 10:38:57.406427] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:54.122 [2024-11-20 10:38:57.406515] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:54.122 [2024-11-20 10:38:57.406623] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:54.122 [2024-11-20 10:38:57.406743] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:54.122 [2024-11-20 10:38:57.406796] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.122 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.123 [2024-11-20 10:38:57.546247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:54.123 [2024-11-20 10:38:57.548323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:54.123 [2024-11-20 10:38:57.548466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:54.123 [2024-11-20 10:38:57.548549] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:54.123 [2024-11-20 10:38:57.548651] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:54.123 [2024-11-20 10:38:57.548715] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:54.123 [2024-11-20 10:38:57.548772] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:54.123 [2024-11-20 10:38:57.548814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:54.123 request: 00:15:54.123 { 00:15:54.123 "name": "raid_bdev1", 00:15:54.123 "raid_level": "raid5f", 00:15:54.123 "base_bdevs": [ 00:15:54.123 "malloc1", 00:15:54.123 "malloc2", 00:15:54.123 "malloc3" 00:15:54.123 ], 00:15:54.123 "strip_size_kb": 64, 00:15:54.123 "superblock": false, 00:15:54.123 "method": "bdev_raid_create", 00:15:54.123 "req_id": 1 00:15:54.123 } 00:15:54.123 Got JSON-RPC error response 00:15:54.123 response: 00:15:54.123 { 00:15:54.123 "code": -17, 00:15:54.123 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:54.123 } 00:15:54.123 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:54.123 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:54.123 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:54.123 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:54.123 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:54.123 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.123 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.123 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:15:54.123 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:54.123 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.382 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:54.382 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:54.382 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:54.382 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.382 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.382 [2024-11-20 10:38:57.614077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:54.382 [2024-11-20 10:38:57.614212] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.382 [2024-11-20 10:38:57.614252] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:54.382 [2024-11-20 10:38:57.614280] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.382 [2024-11-20 10:38:57.616668] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.382 [2024-11-20 10:38:57.616746] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:54.382 [2024-11-20 10:38:57.616855] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:54.382 [2024-11-20 10:38:57.616933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:54.382 pt1 00:15:54.382 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.382 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid5f 64 3 00:15:54.382 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.382 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:54.382 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.382 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.382 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:54.382 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.382 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.382 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.382 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.382 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.382 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.382 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.382 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.382 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.382 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.382 "name": "raid_bdev1", 00:15:54.382 "uuid": "ba3b6eef-addf-4302-9ec6-37b104071739", 00:15:54.382 "strip_size_kb": 64, 00:15:54.382 "state": "configuring", 00:15:54.382 "raid_level": "raid5f", 00:15:54.382 "superblock": true, 00:15:54.382 "num_base_bdevs": 3, 00:15:54.382 "num_base_bdevs_discovered": 1, 00:15:54.382 
"num_base_bdevs_operational": 3, 00:15:54.382 "base_bdevs_list": [ 00:15:54.382 { 00:15:54.382 "name": "pt1", 00:15:54.382 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:54.382 "is_configured": true, 00:15:54.382 "data_offset": 2048, 00:15:54.382 "data_size": 63488 00:15:54.382 }, 00:15:54.382 { 00:15:54.382 "name": null, 00:15:54.382 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:54.382 "is_configured": false, 00:15:54.382 "data_offset": 2048, 00:15:54.382 "data_size": 63488 00:15:54.382 }, 00:15:54.382 { 00:15:54.382 "name": null, 00:15:54.382 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:54.382 "is_configured": false, 00:15:54.382 "data_offset": 2048, 00:15:54.382 "data_size": 63488 00:15:54.382 } 00:15:54.382 ] 00:15:54.382 }' 00:15:54.382 10:38:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.382 10:38:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.641 10:38:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:54.641 10:38:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:54.641 10:38:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.641 10:38:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.641 [2024-11-20 10:38:58.097258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:54.642 [2024-11-20 10:38:58.097444] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.642 [2024-11-20 10:38:58.097477] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:54.642 [2024-11-20 10:38:58.097488] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.642 [2024-11-20 10:38:58.097974] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.642 [2024-11-20 10:38:58.098010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:54.642 [2024-11-20 10:38:58.098109] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:54.642 [2024-11-20 10:38:58.098133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:54.642 pt2 00:15:54.642 10:38:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.642 10:38:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:54.642 10:38:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.642 10:38:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.642 [2024-11-20 10:38:58.109244] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:54.642 10:38:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.642 10:38:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:54.642 10:38:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.642 10:38:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:54.642 10:38:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.642 10:38:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.642 10:38:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:54.642 10:38:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.642 10:38:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:54.642 10:38:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.642 10:38:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.901 10:38:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.901 10:38:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.901 10:38:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.901 10:38:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.901 10:38:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.901 10:38:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.901 "name": "raid_bdev1", 00:15:54.901 "uuid": "ba3b6eef-addf-4302-9ec6-37b104071739", 00:15:54.901 "strip_size_kb": 64, 00:15:54.901 "state": "configuring", 00:15:54.901 "raid_level": "raid5f", 00:15:54.901 "superblock": true, 00:15:54.901 "num_base_bdevs": 3, 00:15:54.901 "num_base_bdevs_discovered": 1, 00:15:54.901 "num_base_bdevs_operational": 3, 00:15:54.901 "base_bdevs_list": [ 00:15:54.901 { 00:15:54.901 "name": "pt1", 00:15:54.901 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:54.901 "is_configured": true, 00:15:54.901 "data_offset": 2048, 00:15:54.901 "data_size": 63488 00:15:54.902 }, 00:15:54.902 { 00:15:54.902 "name": null, 00:15:54.902 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:54.902 "is_configured": false, 00:15:54.902 "data_offset": 0, 00:15:54.902 "data_size": 63488 00:15:54.902 }, 00:15:54.902 { 00:15:54.902 "name": null, 00:15:54.902 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:54.902 "is_configured": false, 00:15:54.902 "data_offset": 2048, 00:15:54.902 "data_size": 63488 00:15:54.902 } 00:15:54.902 ] 00:15:54.902 }' 00:15:54.902 10:38:58 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.902 10:38:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.162 10:38:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:55.162 10:38:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:55.162 10:38:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:55.162 10:38:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.162 10:38:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.162 [2024-11-20 10:38:58.536500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:55.162 [2024-11-20 10:38:58.536652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.162 [2024-11-20 10:38:58.536690] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:55.162 [2024-11-20 10:38:58.536721] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.162 [2024-11-20 10:38:58.537266] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.162 [2024-11-20 10:38:58.537331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:55.162 [2024-11-20 10:38:58.537463] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:55.162 [2024-11-20 10:38:58.537520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:55.162 pt2 00:15:55.162 10:38:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.162 10:38:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:55.162 10:38:58 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:55.162 10:38:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:55.162 10:38:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.162 10:38:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.162 [2024-11-20 10:38:58.548488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:55.162 [2024-11-20 10:38:58.548587] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.162 [2024-11-20 10:38:58.548620] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:55.162 [2024-11-20 10:38:58.548648] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.162 [2024-11-20 10:38:58.549131] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.162 [2024-11-20 10:38:58.549197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:55.162 [2024-11-20 10:38:58.549301] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:55.162 [2024-11-20 10:38:58.549365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:55.162 [2024-11-20 10:38:58.549539] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:55.162 [2024-11-20 10:38:58.549584] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:55.162 [2024-11-20 10:38:58.549868] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:55.162 [2024-11-20 10:38:58.555684] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:55.162 [2024-11-20 10:38:58.555742] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:55.162 [2024-11-20 10:38:58.555991] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.162 pt3 00:15:55.162 10:38:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.162 10:38:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:55.162 10:38:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:55.162 10:38:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:55.162 10:38:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.162 10:38:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.162 10:38:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.162 10:38:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.162 10:38:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:55.162 10:38:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.162 10:38:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.162 10:38:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.163 10:38:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.163 10:38:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.163 10:38:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.163 10:38:58 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.163 10:38:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.163 10:38:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.163 10:38:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.163 "name": "raid_bdev1", 00:15:55.163 "uuid": "ba3b6eef-addf-4302-9ec6-37b104071739", 00:15:55.163 "strip_size_kb": 64, 00:15:55.163 "state": "online", 00:15:55.163 "raid_level": "raid5f", 00:15:55.163 "superblock": true, 00:15:55.163 "num_base_bdevs": 3, 00:15:55.163 "num_base_bdevs_discovered": 3, 00:15:55.163 "num_base_bdevs_operational": 3, 00:15:55.163 "base_bdevs_list": [ 00:15:55.163 { 00:15:55.163 "name": "pt1", 00:15:55.163 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:55.163 "is_configured": true, 00:15:55.163 "data_offset": 2048, 00:15:55.163 "data_size": 63488 00:15:55.163 }, 00:15:55.163 { 00:15:55.163 "name": "pt2", 00:15:55.163 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:55.163 "is_configured": true, 00:15:55.163 "data_offset": 2048, 00:15:55.163 "data_size": 63488 00:15:55.163 }, 00:15:55.163 { 00:15:55.163 "name": "pt3", 00:15:55.163 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:55.163 "is_configured": true, 00:15:55.163 "data_offset": 2048, 00:15:55.163 "data_size": 63488 00:15:55.163 } 00:15:55.163 ] 00:15:55.163 }' 00:15:55.163 10:38:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.163 10:38:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.732 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:55.732 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:55.732 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:15:55.732 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:55.732 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:55.732 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:55.732 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:55.732 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:55.732 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.732 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.732 [2024-11-20 10:38:59.022118] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:55.732 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.732 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:55.732 "name": "raid_bdev1", 00:15:55.732 "aliases": [ 00:15:55.732 "ba3b6eef-addf-4302-9ec6-37b104071739" 00:15:55.732 ], 00:15:55.732 "product_name": "Raid Volume", 00:15:55.732 "block_size": 512, 00:15:55.732 "num_blocks": 126976, 00:15:55.732 "uuid": "ba3b6eef-addf-4302-9ec6-37b104071739", 00:15:55.732 "assigned_rate_limits": { 00:15:55.732 "rw_ios_per_sec": 0, 00:15:55.732 "rw_mbytes_per_sec": 0, 00:15:55.732 "r_mbytes_per_sec": 0, 00:15:55.732 "w_mbytes_per_sec": 0 00:15:55.732 }, 00:15:55.732 "claimed": false, 00:15:55.732 "zoned": false, 00:15:55.732 "supported_io_types": { 00:15:55.732 "read": true, 00:15:55.732 "write": true, 00:15:55.732 "unmap": false, 00:15:55.732 "flush": false, 00:15:55.732 "reset": true, 00:15:55.732 "nvme_admin": false, 00:15:55.732 "nvme_io": false, 00:15:55.732 "nvme_io_md": false, 00:15:55.732 "write_zeroes": true, 00:15:55.732 "zcopy": false, 00:15:55.732 
"get_zone_info": false, 00:15:55.732 "zone_management": false, 00:15:55.732 "zone_append": false, 00:15:55.732 "compare": false, 00:15:55.732 "compare_and_write": false, 00:15:55.732 "abort": false, 00:15:55.732 "seek_hole": false, 00:15:55.732 "seek_data": false, 00:15:55.732 "copy": false, 00:15:55.732 "nvme_iov_md": false 00:15:55.732 }, 00:15:55.732 "driver_specific": { 00:15:55.732 "raid": { 00:15:55.732 "uuid": "ba3b6eef-addf-4302-9ec6-37b104071739", 00:15:55.732 "strip_size_kb": 64, 00:15:55.732 "state": "online", 00:15:55.732 "raid_level": "raid5f", 00:15:55.732 "superblock": true, 00:15:55.732 "num_base_bdevs": 3, 00:15:55.732 "num_base_bdevs_discovered": 3, 00:15:55.732 "num_base_bdevs_operational": 3, 00:15:55.732 "base_bdevs_list": [ 00:15:55.732 { 00:15:55.732 "name": "pt1", 00:15:55.732 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:55.732 "is_configured": true, 00:15:55.732 "data_offset": 2048, 00:15:55.732 "data_size": 63488 00:15:55.732 }, 00:15:55.732 { 00:15:55.732 "name": "pt2", 00:15:55.732 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:55.732 "is_configured": true, 00:15:55.732 "data_offset": 2048, 00:15:55.732 "data_size": 63488 00:15:55.732 }, 00:15:55.732 { 00:15:55.732 "name": "pt3", 00:15:55.733 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:55.733 "is_configured": true, 00:15:55.733 "data_offset": 2048, 00:15:55.733 "data_size": 63488 00:15:55.733 } 00:15:55.733 ] 00:15:55.733 } 00:15:55.733 } 00:15:55.733 }' 00:15:55.733 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:55.733 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:55.733 pt2 00:15:55.733 pt3' 00:15:55.733 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.733 10:38:59 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:55.733 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:55.733 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.733 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:55.733 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.733 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.733 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.733 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:55.733 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:55.733 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:55.733 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:55.733 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.733 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.733 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.991 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.991 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:55.991 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:55.991 10:38:59 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:55.991 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.991 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:55.991 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.991 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.991 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.991 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:55.991 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:55.991 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:55.991 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.991 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:55.992 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.992 [2024-11-20 10:38:59.305666] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:55.992 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.992 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ba3b6eef-addf-4302-9ec6-37b104071739 '!=' ba3b6eef-addf-4302-9ec6-37b104071739 ']' 00:15:55.992 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:55.992 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:55.992 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:15:55.992 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:55.992 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.992 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.992 [2024-11-20 10:38:59.337483] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:55.992 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.992 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:55.992 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.992 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.992 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.992 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.992 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:55.992 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.992 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.992 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.992 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.992 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.992 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.992 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:55.992 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.992 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.992 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.992 "name": "raid_bdev1", 00:15:55.992 "uuid": "ba3b6eef-addf-4302-9ec6-37b104071739", 00:15:55.992 "strip_size_kb": 64, 00:15:55.992 "state": "online", 00:15:55.992 "raid_level": "raid5f", 00:15:55.992 "superblock": true, 00:15:55.992 "num_base_bdevs": 3, 00:15:55.992 "num_base_bdevs_discovered": 2, 00:15:55.992 "num_base_bdevs_operational": 2, 00:15:55.992 "base_bdevs_list": [ 00:15:55.992 { 00:15:55.992 "name": null, 00:15:55.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.992 "is_configured": false, 00:15:55.992 "data_offset": 0, 00:15:55.992 "data_size": 63488 00:15:55.992 }, 00:15:55.992 { 00:15:55.992 "name": "pt2", 00:15:55.992 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:55.992 "is_configured": true, 00:15:55.992 "data_offset": 2048, 00:15:55.992 "data_size": 63488 00:15:55.992 }, 00:15:55.992 { 00:15:55.992 "name": "pt3", 00:15:55.992 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:55.992 "is_configured": true, 00:15:55.992 "data_offset": 2048, 00:15:55.992 "data_size": 63488 00:15:55.992 } 00:15:55.992 ] 00:15:55.992 }' 00:15:55.992 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.992 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.561 [2024-11-20 10:38:59.792590] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:56.561 [2024-11-20 10:38:59.792684] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:56.561 [2024-11-20 10:38:59.792784] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:56.561 [2024-11-20 10:38:59.792880] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:56.561 [2024-11-20 10:38:59.792943] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.561 [2024-11-20 10:38:59.864461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:56.561 [2024-11-20 10:38:59.864525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.561 [2024-11-20 10:38:59.864543] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:56.561 [2024-11-20 10:38:59.864553] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:15:56.561 [2024-11-20 10:38:59.866859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.561 [2024-11-20 10:38:59.866901] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:56.561 [2024-11-20 10:38:59.866989] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:56.561 [2024-11-20 10:38:59.867040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:56.561 pt2 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.561 "name": "raid_bdev1", 00:15:56.561 "uuid": "ba3b6eef-addf-4302-9ec6-37b104071739", 00:15:56.561 "strip_size_kb": 64, 00:15:56.561 "state": "configuring", 00:15:56.561 "raid_level": "raid5f", 00:15:56.561 "superblock": true, 00:15:56.561 "num_base_bdevs": 3, 00:15:56.561 "num_base_bdevs_discovered": 1, 00:15:56.561 "num_base_bdevs_operational": 2, 00:15:56.561 "base_bdevs_list": [ 00:15:56.561 { 00:15:56.561 "name": null, 00:15:56.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.561 "is_configured": false, 00:15:56.561 "data_offset": 2048, 00:15:56.561 "data_size": 63488 00:15:56.561 }, 00:15:56.561 { 00:15:56.561 "name": "pt2", 00:15:56.561 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:56.561 "is_configured": true, 00:15:56.561 "data_offset": 2048, 00:15:56.561 "data_size": 63488 00:15:56.561 }, 00:15:56.561 { 00:15:56.561 "name": null, 00:15:56.561 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:56.561 "is_configured": false, 00:15:56.561 "data_offset": 2048, 00:15:56.561 "data_size": 63488 00:15:56.561 } 00:15:56.561 ] 00:15:56.561 }' 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.561 10:38:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.130 10:39:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:57.130 10:39:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:57.130 10:39:00 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:15:57.130 10:39:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:57.130 10:39:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.130 10:39:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.130 [2024-11-20 10:39:00.319705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:57.130 [2024-11-20 10:39:00.319780] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.130 [2024-11-20 10:39:00.319806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:57.130 [2024-11-20 10:39:00.319819] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.130 [2024-11-20 10:39:00.320321] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.130 [2024-11-20 10:39:00.320345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:57.130 [2024-11-20 10:39:00.320450] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:57.130 [2024-11-20 10:39:00.320493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:57.130 [2024-11-20 10:39:00.320709] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:57.130 [2024-11-20 10:39:00.320725] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:57.130 [2024-11-20 10:39:00.321011] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:57.130 [2024-11-20 10:39:00.327455] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:57.130 [2024-11-20 10:39:00.327524] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:15:57.130 [2024-11-20 10:39:00.327904] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.130 pt3 00:15:57.130 10:39:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.130 10:39:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:57.130 10:39:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.130 10:39:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.130 10:39:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.130 10:39:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.130 10:39:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:57.130 10:39:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.130 10:39:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.130 10:39:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.130 10:39:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.130 10:39:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.130 10:39:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.130 10:39:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.130 10:39:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.130 10:39:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.130 10:39:00 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.130 "name": "raid_bdev1", 00:15:57.130 "uuid": "ba3b6eef-addf-4302-9ec6-37b104071739", 00:15:57.130 "strip_size_kb": 64, 00:15:57.130 "state": "online", 00:15:57.130 "raid_level": "raid5f", 00:15:57.130 "superblock": true, 00:15:57.130 "num_base_bdevs": 3, 00:15:57.131 "num_base_bdevs_discovered": 2, 00:15:57.131 "num_base_bdevs_operational": 2, 00:15:57.131 "base_bdevs_list": [ 00:15:57.131 { 00:15:57.131 "name": null, 00:15:57.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.131 "is_configured": false, 00:15:57.131 "data_offset": 2048, 00:15:57.131 "data_size": 63488 00:15:57.131 }, 00:15:57.131 { 00:15:57.131 "name": "pt2", 00:15:57.131 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:57.131 "is_configured": true, 00:15:57.131 "data_offset": 2048, 00:15:57.131 "data_size": 63488 00:15:57.131 }, 00:15:57.131 { 00:15:57.131 "name": "pt3", 00:15:57.131 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:57.131 "is_configured": true, 00:15:57.131 "data_offset": 2048, 00:15:57.131 "data_size": 63488 00:15:57.131 } 00:15:57.131 ] 00:15:57.131 }' 00:15:57.131 10:39:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.131 10:39:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.390 10:39:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:57.390 10:39:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.390 10:39:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.390 [2024-11-20 10:39:00.827532] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:57.390 [2024-11-20 10:39:00.827638] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:57.390 [2024-11-20 10:39:00.827748] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:57.390 [2024-11-20 10:39:00.827859] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:57.390 [2024-11-20 10:39:00.827918] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:57.390 10:39:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.390 10:39:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:57.390 10:39:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.390 10:39:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.390 10:39:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.390 10:39:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.649 10:39:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:57.649 10:39:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:57.649 10:39:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:15:57.649 10:39:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:15:57.649 10:39:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:15:57.649 10:39:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.649 10:39:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.649 10:39:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.649 10:39:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:15:57.649 10:39:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.649 10:39:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.649 [2024-11-20 10:39:00.887514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:57.649 [2024-11-20 10:39:00.887585] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.649 [2024-11-20 10:39:00.887606] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:57.649 [2024-11-20 10:39:00.887616] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.649 [2024-11-20 10:39:00.890219] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.649 [2024-11-20 10:39:00.890262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:57.649 [2024-11-20 10:39:00.890365] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:57.649 [2024-11-20 10:39:00.890422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:57.649 [2024-11-20 10:39:00.890566] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:57.649 [2024-11-20 10:39:00.890584] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:57.649 [2024-11-20 10:39:00.890602] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:57.649 [2024-11-20 10:39:00.890725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:57.649 pt1 00:15:57.649 10:39:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.649 10:39:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:15:57.649 10:39:00 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:57.649 10:39:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.649 10:39:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:57.649 10:39:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.649 10:39:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.649 10:39:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:57.649 10:39:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.649 10:39:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.649 10:39:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.649 10:39:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.649 10:39:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.649 10:39:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.649 10:39:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.649 10:39:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.649 10:39:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.649 10:39:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.649 "name": "raid_bdev1", 00:15:57.649 "uuid": "ba3b6eef-addf-4302-9ec6-37b104071739", 00:15:57.649 "strip_size_kb": 64, 00:15:57.649 "state": "configuring", 00:15:57.649 "raid_level": "raid5f", 00:15:57.649 
"superblock": true, 00:15:57.649 "num_base_bdevs": 3, 00:15:57.649 "num_base_bdevs_discovered": 1, 00:15:57.649 "num_base_bdevs_operational": 2, 00:15:57.649 "base_bdevs_list": [ 00:15:57.649 { 00:15:57.649 "name": null, 00:15:57.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.649 "is_configured": false, 00:15:57.649 "data_offset": 2048, 00:15:57.649 "data_size": 63488 00:15:57.649 }, 00:15:57.649 { 00:15:57.649 "name": "pt2", 00:15:57.649 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:57.649 "is_configured": true, 00:15:57.649 "data_offset": 2048, 00:15:57.649 "data_size": 63488 00:15:57.649 }, 00:15:57.649 { 00:15:57.649 "name": null, 00:15:57.649 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:57.649 "is_configured": false, 00:15:57.649 "data_offset": 2048, 00:15:57.649 "data_size": 63488 00:15:57.649 } 00:15:57.649 ] 00:15:57.649 }' 00:15:57.649 10:39:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.649 10:39:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.908 10:39:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:57.908 10:39:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:57.908 10:39:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.908 10:39:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.908 10:39:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.168 10:39:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:58.168 10:39:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:58.168 10:39:01 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.168 10:39:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.168 [2024-11-20 10:39:01.406618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:58.168 [2024-11-20 10:39:01.406755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.168 [2024-11-20 10:39:01.406802] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:58.168 [2024-11-20 10:39:01.406835] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.168 [2024-11-20 10:39:01.407460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.168 [2024-11-20 10:39:01.407532] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:58.168 [2024-11-20 10:39:01.407666] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:58.168 [2024-11-20 10:39:01.407728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:58.168 [2024-11-20 10:39:01.407919] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:58.168 [2024-11-20 10:39:01.407965] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:58.168 [2024-11-20 10:39:01.408294] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:58.168 [2024-11-20 10:39:01.415037] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:58.168 [2024-11-20 10:39:01.415105] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:58.168 [2024-11-20 10:39:01.415439] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.168 pt3 00:15:58.168 10:39:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:58.168 10:39:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:58.168 10:39:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.168 10:39:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.168 10:39:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.168 10:39:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.168 10:39:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:58.168 10:39:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.168 10:39:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.168 10:39:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.168 10:39:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.168 10:39:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.168 10:39:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.168 10:39:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.168 10:39:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.168 10:39:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.168 10:39:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.168 "name": "raid_bdev1", 00:15:58.168 "uuid": "ba3b6eef-addf-4302-9ec6-37b104071739", 00:15:58.168 "strip_size_kb": 64, 00:15:58.168 "state": "online", 00:15:58.168 "raid_level": 
"raid5f", 00:15:58.168 "superblock": true, 00:15:58.168 "num_base_bdevs": 3, 00:15:58.168 "num_base_bdevs_discovered": 2, 00:15:58.168 "num_base_bdevs_operational": 2, 00:15:58.168 "base_bdevs_list": [ 00:15:58.168 { 00:15:58.168 "name": null, 00:15:58.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.168 "is_configured": false, 00:15:58.168 "data_offset": 2048, 00:15:58.168 "data_size": 63488 00:15:58.168 }, 00:15:58.168 { 00:15:58.168 "name": "pt2", 00:15:58.168 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:58.168 "is_configured": true, 00:15:58.168 "data_offset": 2048, 00:15:58.168 "data_size": 63488 00:15:58.168 }, 00:15:58.168 { 00:15:58.168 "name": "pt3", 00:15:58.168 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:58.168 "is_configured": true, 00:15:58.168 "data_offset": 2048, 00:15:58.168 "data_size": 63488 00:15:58.168 } 00:15:58.168 ] 00:15:58.168 }' 00:15:58.168 10:39:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.168 10:39:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.426 10:39:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:58.426 10:39:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.426 10:39:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.426 10:39:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:58.426 10:39:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.683 10:39:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:58.683 10:39:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:58.683 10:39:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 
00:15:58.683 10:39:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.683 10:39:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.683 [2024-11-20 10:39:01.918430] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:58.683 10:39:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.683 10:39:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' ba3b6eef-addf-4302-9ec6-37b104071739 '!=' ba3b6eef-addf-4302-9ec6-37b104071739 ']' 00:15:58.683 10:39:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81298 00:15:58.683 10:39:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81298 ']' 00:15:58.683 10:39:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81298 00:15:58.683 10:39:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:58.683 10:39:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:58.683 10:39:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81298 00:15:58.683 10:39:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:58.684 10:39:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:58.684 10:39:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81298' 00:15:58.684 killing process with pid 81298 00:15:58.684 10:39:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81298 00:15:58.684 [2024-11-20 10:39:01.999229] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:58.684 [2024-11-20 10:39:01.999421] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:15:58.684 10:39:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81298 00:15:58.684 [2024-11-20 10:39:01.999537] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:58.684 [2024-11-20 10:39:01.999554] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:58.941 [2024-11-20 10:39:02.297855] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:00.315 10:39:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:00.315 00:16:00.315 real 0m7.903s 00:16:00.315 user 0m12.390s 00:16:00.315 sys 0m1.442s 00:16:00.315 10:39:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:00.315 10:39:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.315 ************************************ 00:16:00.315 END TEST raid5f_superblock_test 00:16:00.315 ************************************ 00:16:00.315 10:39:03 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:00.315 10:39:03 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:16:00.315 10:39:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:00.315 10:39:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:00.315 10:39:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:00.315 ************************************ 00:16:00.315 START TEST raid5f_rebuild_test 00:16:00.315 ************************************ 00:16:00.315 10:39:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:16:00.315 10:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:00.315 10:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:16:00.315 10:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:00.315 10:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:00.315 10:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:00.315 10:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:00.315 10:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:00.315 10:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:00.315 10:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:00.315 10:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:00.315 10:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:00.315 10:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:00.315 10:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:00.315 10:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:00.315 10:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:00.315 10:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:00.315 10:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:00.315 10:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:00.315 10:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:00.315 10:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:00.315 10:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:00.315 10:39:03 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:00.315 10:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:00.315 10:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:00.315 10:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:00.315 10:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:00.315 10:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:00.315 10:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:00.315 10:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81736 00:16:00.315 10:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81736 00:16:00.315 10:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:00.315 10:39:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81736 ']' 00:16:00.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:00.315 10:39:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.315 10:39:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:00.315 10:39:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.315 10:39:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:00.315 10:39:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.315 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:16:00.315 Zero copy mechanism will not be used. 00:16:00.315 [2024-11-20 10:39:03.562858] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:16:00.315 [2024-11-20 10:39:03.562979] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81736 ] 00:16:00.315 [2024-11-20 10:39:03.735889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.574 [2024-11-20 10:39:03.851716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.832 [2024-11-20 10:39:04.060092] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:00.832 [2024-11-20 10:39:04.060152] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:01.091 10:39:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:01.091 10:39:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:16:01.091 10:39:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:01.091 10:39:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:01.091 10:39:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.091 10:39:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.091 BaseBdev1_malloc 00:16:01.091 10:39:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.091 10:39:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:01.091 10:39:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.091 10:39:04 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.091 [2024-11-20 10:39:04.438315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:01.091 [2024-11-20 10:39:04.438484] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.091 [2024-11-20 10:39:04.438520] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:01.091 [2024-11-20 10:39:04.438534] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.091 [2024-11-20 10:39:04.440881] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.091 [2024-11-20 10:39:04.440920] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:01.091 BaseBdev1 00:16:01.091 10:39:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.091 10:39:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:01.091 10:39:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:01.091 10:39:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.091 10:39:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.091 BaseBdev2_malloc 00:16:01.091 10:39:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.091 10:39:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:01.091 10:39:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.091 10:39:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.091 [2024-11-20 10:39:04.491652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:16:01.091 [2024-11-20 10:39:04.491714] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.091 [2024-11-20 10:39:04.491750] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:01.091 [2024-11-20 10:39:04.491763] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.091 [2024-11-20 10:39:04.493908] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.091 [2024-11-20 10:39:04.493982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:01.091 BaseBdev2 00:16:01.091 10:39:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.091 10:39:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:01.091 10:39:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:01.091 10:39:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.091 10:39:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.091 BaseBdev3_malloc 00:16:01.091 10:39:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.091 10:39:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:01.091 10:39:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.091 10:39:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.091 [2024-11-20 10:39:04.560820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:01.091 [2024-11-20 10:39:04.560879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.091 [2024-11-20 10:39:04.560900] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:16:01.091 [2024-11-20 10:39:04.560911] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.091 [2024-11-20 10:39:04.562977] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.091 [2024-11-20 10:39:04.563017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:01.091 BaseBdev3 00:16:01.091 10:39:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.091 10:39:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:01.091 10:39:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.091 10:39:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.350 spare_malloc 00:16:01.350 10:39:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.350 10:39:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:01.350 10:39:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.350 10:39:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.350 spare_delay 00:16:01.350 10:39:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.350 10:39:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:01.350 10:39:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.350 10:39:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.351 [2024-11-20 10:39:04.629736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:01.351 [2024-11-20 10:39:04.629840] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.351 [2024-11-20 10:39:04.629862] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:01.351 [2024-11-20 10:39:04.629873] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.351 [2024-11-20 10:39:04.632001] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.351 [2024-11-20 10:39:04.632045] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:01.351 spare 00:16:01.351 10:39:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.351 10:39:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:16:01.351 10:39:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.351 10:39:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.351 [2024-11-20 10:39:04.641783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:01.351 [2024-11-20 10:39:04.643675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:01.351 [2024-11-20 10:39:04.643744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:01.351 [2024-11-20 10:39:04.643837] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:01.351 [2024-11-20 10:39:04.643849] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:01.351 [2024-11-20 10:39:04.644152] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:01.351 [2024-11-20 10:39:04.650131] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:01.351 [2024-11-20 10:39:04.650155] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:01.351 [2024-11-20 10:39:04.650343] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.351 10:39:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.351 10:39:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:01.351 10:39:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.351 10:39:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.351 10:39:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:01.351 10:39:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.351 10:39:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:01.351 10:39:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.351 10:39:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.351 10:39:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.351 10:39:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.351 10:39:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.351 10:39:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.351 10:39:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.351 10:39:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.351 10:39:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.351 10:39:04 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.351 "name": "raid_bdev1", 00:16:01.351 "uuid": "c49f6eb2-2cf9-443a-88b5-2e609a1fc5b0", 00:16:01.351 "strip_size_kb": 64, 00:16:01.351 "state": "online", 00:16:01.351 "raid_level": "raid5f", 00:16:01.351 "superblock": false, 00:16:01.351 "num_base_bdevs": 3, 00:16:01.351 "num_base_bdevs_discovered": 3, 00:16:01.351 "num_base_bdevs_operational": 3, 00:16:01.351 "base_bdevs_list": [ 00:16:01.351 { 00:16:01.351 "name": "BaseBdev1", 00:16:01.351 "uuid": "87017a67-d271-5f52-9ae4-1f8dcb11fa16", 00:16:01.351 "is_configured": true, 00:16:01.351 "data_offset": 0, 00:16:01.351 "data_size": 65536 00:16:01.351 }, 00:16:01.351 { 00:16:01.351 "name": "BaseBdev2", 00:16:01.351 "uuid": "2a259816-336a-5c94-9f76-a7cee83cb265", 00:16:01.351 "is_configured": true, 00:16:01.351 "data_offset": 0, 00:16:01.351 "data_size": 65536 00:16:01.351 }, 00:16:01.351 { 00:16:01.351 "name": "BaseBdev3", 00:16:01.351 "uuid": "3f88bf83-03a0-5ffc-83d0-f819136a8425", 00:16:01.351 "is_configured": true, 00:16:01.351 "data_offset": 0, 00:16:01.351 "data_size": 65536 00:16:01.351 } 00:16:01.351 ] 00:16:01.351 }' 00:16:01.351 10:39:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.351 10:39:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.919 10:39:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:01.919 10:39:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:01.919 10:39:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.919 10:39:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.919 [2024-11-20 10:39:05.097192] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:01.919 10:39:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:01.919 10:39:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:16:01.919 10:39:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.919 10:39:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.919 10:39:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.919 10:39:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:01.919 10:39:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.919 10:39:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:01.919 10:39:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:01.920 10:39:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:01.920 10:39:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:01.920 10:39:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:01.920 10:39:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:01.920 10:39:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:01.920 10:39:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:01.920 10:39:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:01.920 10:39:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:01.920 10:39:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:01.920 10:39:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:01.920 10:39:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:16:01.920 10:39:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:01.920 [2024-11-20 10:39:05.368564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:01.920 /dev/nbd0 00:16:02.178 10:39:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:02.178 10:39:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:02.178 10:39:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:02.179 10:39:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:02.179 10:39:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:02.179 10:39:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:02.179 10:39:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:02.179 10:39:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:02.179 10:39:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:02.179 10:39:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:02.179 10:39:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:02.179 1+0 records in 00:16:02.179 1+0 records out 00:16:02.179 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000515513 s, 7.9 MB/s 00:16:02.179 10:39:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.179 10:39:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:02.179 10:39:05 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.179 10:39:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:02.179 10:39:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:02.179 10:39:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:02.179 10:39:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:02.179 10:39:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:02.179 10:39:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:16:02.179 10:39:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:16:02.179 10:39:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:16:02.437 512+0 records in 00:16:02.437 512+0 records out 00:16:02.437 67108864 bytes (67 MB, 64 MiB) copied, 0.353139 s, 190 MB/s 00:16:02.437 10:39:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:02.437 10:39:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:02.437 10:39:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:02.437 10:39:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:02.437 10:39:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:02.437 10:39:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:02.437 10:39:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:02.695 [2024-11-20 10:39:05.985971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:16:02.695 10:39:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:02.695 10:39:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:02.695 10:39:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:02.695 10:39:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:02.695 10:39:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:02.695 10:39:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:02.695 10:39:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:02.695 10:39:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:02.696 10:39:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:02.696 10:39:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.696 10:39:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.696 [2024-11-20 10:39:06.033653] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:02.696 10:39:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.696 10:39:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:02.696 10:39:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:02.696 10:39:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:02.696 10:39:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:02.696 10:39:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.696 10:39:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:16:02.696 10:39:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.696 10:39:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.696 10:39:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.696 10:39:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.696 10:39:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.696 10:39:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.696 10:39:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.696 10:39:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.696 10:39:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.696 10:39:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.696 "name": "raid_bdev1", 00:16:02.696 "uuid": "c49f6eb2-2cf9-443a-88b5-2e609a1fc5b0", 00:16:02.696 "strip_size_kb": 64, 00:16:02.696 "state": "online", 00:16:02.696 "raid_level": "raid5f", 00:16:02.696 "superblock": false, 00:16:02.696 "num_base_bdevs": 3, 00:16:02.696 "num_base_bdevs_discovered": 2, 00:16:02.696 "num_base_bdevs_operational": 2, 00:16:02.696 "base_bdevs_list": [ 00:16:02.696 { 00:16:02.696 "name": null, 00:16:02.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.696 "is_configured": false, 00:16:02.696 "data_offset": 0, 00:16:02.696 "data_size": 65536 00:16:02.696 }, 00:16:02.696 { 00:16:02.696 "name": "BaseBdev2", 00:16:02.696 "uuid": "2a259816-336a-5c94-9f76-a7cee83cb265", 00:16:02.696 "is_configured": true, 00:16:02.696 "data_offset": 0, 00:16:02.696 "data_size": 65536 00:16:02.696 }, 00:16:02.696 { 00:16:02.696 "name": "BaseBdev3", 00:16:02.696 "uuid": 
"3f88bf83-03a0-5ffc-83d0-f819136a8425", 00:16:02.696 "is_configured": true, 00:16:02.696 "data_offset": 0, 00:16:02.696 "data_size": 65536 00:16:02.696 } 00:16:02.696 ] 00:16:02.696 }' 00:16:02.696 10:39:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.696 10:39:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.263 10:39:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:03.263 10:39:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.263 10:39:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.263 [2024-11-20 10:39:06.464966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:03.264 [2024-11-20 10:39:06.483103] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:16:03.264 10:39:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.264 10:39:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:03.264 [2024-11-20 10:39:06.491516] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:04.199 10:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.199 10:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.200 10:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.200 10:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.200 10:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.200 10:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.200 10:39:07 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.200 10:39:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.200 10:39:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.200 10:39:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.200 10:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.200 "name": "raid_bdev1", 00:16:04.200 "uuid": "c49f6eb2-2cf9-443a-88b5-2e609a1fc5b0", 00:16:04.200 "strip_size_kb": 64, 00:16:04.200 "state": "online", 00:16:04.200 "raid_level": "raid5f", 00:16:04.200 "superblock": false, 00:16:04.200 "num_base_bdevs": 3, 00:16:04.200 "num_base_bdevs_discovered": 3, 00:16:04.200 "num_base_bdevs_operational": 3, 00:16:04.200 "process": { 00:16:04.200 "type": "rebuild", 00:16:04.200 "target": "spare", 00:16:04.200 "progress": { 00:16:04.200 "blocks": 20480, 00:16:04.200 "percent": 15 00:16:04.200 } 00:16:04.200 }, 00:16:04.200 "base_bdevs_list": [ 00:16:04.200 { 00:16:04.200 "name": "spare", 00:16:04.200 "uuid": "646154d4-5c3b-52f7-80a0-e1b8d57759a4", 00:16:04.200 "is_configured": true, 00:16:04.200 "data_offset": 0, 00:16:04.200 "data_size": 65536 00:16:04.200 }, 00:16:04.200 { 00:16:04.200 "name": "BaseBdev2", 00:16:04.200 "uuid": "2a259816-336a-5c94-9f76-a7cee83cb265", 00:16:04.200 "is_configured": true, 00:16:04.200 "data_offset": 0, 00:16:04.200 "data_size": 65536 00:16:04.200 }, 00:16:04.200 { 00:16:04.200 "name": "BaseBdev3", 00:16:04.200 "uuid": "3f88bf83-03a0-5ffc-83d0-f819136a8425", 00:16:04.200 "is_configured": true, 00:16:04.200 "data_offset": 0, 00:16:04.200 "data_size": 65536 00:16:04.200 } 00:16:04.200 ] 00:16:04.200 }' 00:16:04.200 10:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.200 10:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:04.200 10:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.200 10:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:04.200 10:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:04.200 10:39:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.200 10:39:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.200 [2024-11-20 10:39:07.646651] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:04.459 [2024-11-20 10:39:07.700811] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:04.459 [2024-11-20 10:39:07.700943] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.459 [2024-11-20 10:39:07.700985] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:04.459 [2024-11-20 10:39:07.701007] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:04.459 10:39:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.459 10:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:04.459 10:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.459 10:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.459 10:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:04.459 10:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.459 10:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:16:04.459 10:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.459 10:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.459 10:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.459 10:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.459 10:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.459 10:39:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.459 10:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.459 10:39:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.459 10:39:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.459 10:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.459 "name": "raid_bdev1", 00:16:04.459 "uuid": "c49f6eb2-2cf9-443a-88b5-2e609a1fc5b0", 00:16:04.459 "strip_size_kb": 64, 00:16:04.459 "state": "online", 00:16:04.459 "raid_level": "raid5f", 00:16:04.459 "superblock": false, 00:16:04.459 "num_base_bdevs": 3, 00:16:04.459 "num_base_bdevs_discovered": 2, 00:16:04.459 "num_base_bdevs_operational": 2, 00:16:04.459 "base_bdevs_list": [ 00:16:04.459 { 00:16:04.459 "name": null, 00:16:04.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.459 "is_configured": false, 00:16:04.459 "data_offset": 0, 00:16:04.459 "data_size": 65536 00:16:04.459 }, 00:16:04.459 { 00:16:04.459 "name": "BaseBdev2", 00:16:04.459 "uuid": "2a259816-336a-5c94-9f76-a7cee83cb265", 00:16:04.459 "is_configured": true, 00:16:04.459 "data_offset": 0, 00:16:04.459 "data_size": 65536 00:16:04.459 }, 00:16:04.459 { 00:16:04.459 "name": "BaseBdev3", 00:16:04.459 "uuid": 
"3f88bf83-03a0-5ffc-83d0-f819136a8425", 00:16:04.459 "is_configured": true, 00:16:04.459 "data_offset": 0, 00:16:04.459 "data_size": 65536 00:16:04.459 } 00:16:04.459 ] 00:16:04.459 }' 00:16:04.459 10:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.459 10:39:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.718 10:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:04.718 10:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.718 10:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:04.718 10:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:04.718 10:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.718 10:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.718 10:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.977 10:39:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.977 10:39:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.977 10:39:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.977 10:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.977 "name": "raid_bdev1", 00:16:04.977 "uuid": "c49f6eb2-2cf9-443a-88b5-2e609a1fc5b0", 00:16:04.977 "strip_size_kb": 64, 00:16:04.977 "state": "online", 00:16:04.977 "raid_level": "raid5f", 00:16:04.977 "superblock": false, 00:16:04.977 "num_base_bdevs": 3, 00:16:04.977 "num_base_bdevs_discovered": 2, 00:16:04.977 "num_base_bdevs_operational": 2, 00:16:04.977 "base_bdevs_list": [ 00:16:04.977 { 00:16:04.977 
"name": null, 00:16:04.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.977 "is_configured": false, 00:16:04.977 "data_offset": 0, 00:16:04.977 "data_size": 65536 00:16:04.977 }, 00:16:04.977 { 00:16:04.977 "name": "BaseBdev2", 00:16:04.977 "uuid": "2a259816-336a-5c94-9f76-a7cee83cb265", 00:16:04.977 "is_configured": true, 00:16:04.977 "data_offset": 0, 00:16:04.977 "data_size": 65536 00:16:04.977 }, 00:16:04.977 { 00:16:04.977 "name": "BaseBdev3", 00:16:04.977 "uuid": "3f88bf83-03a0-5ffc-83d0-f819136a8425", 00:16:04.977 "is_configured": true, 00:16:04.977 "data_offset": 0, 00:16:04.977 "data_size": 65536 00:16:04.977 } 00:16:04.977 ] 00:16:04.977 }' 00:16:04.977 10:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.978 10:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:04.978 10:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.978 10:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:04.978 10:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:04.978 10:39:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.978 10:39:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.978 [2024-11-20 10:39:08.321940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:04.978 [2024-11-20 10:39:08.338129] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:04.978 10:39:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.978 10:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:04.978 [2024-11-20 10:39:08.345730] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:16:05.913 10:39:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:05.913 10:39:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.913 10:39:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:05.913 10:39:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:05.913 10:39:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.913 10:39:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.913 10:39:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.913 10:39:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.913 10:39:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.913 10:39:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.172 10:39:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.172 "name": "raid_bdev1", 00:16:06.172 "uuid": "c49f6eb2-2cf9-443a-88b5-2e609a1fc5b0", 00:16:06.172 "strip_size_kb": 64, 00:16:06.172 "state": "online", 00:16:06.172 "raid_level": "raid5f", 00:16:06.172 "superblock": false, 00:16:06.172 "num_base_bdevs": 3, 00:16:06.172 "num_base_bdevs_discovered": 3, 00:16:06.172 "num_base_bdevs_operational": 3, 00:16:06.172 "process": { 00:16:06.172 "type": "rebuild", 00:16:06.172 "target": "spare", 00:16:06.172 "progress": { 00:16:06.172 "blocks": 20480, 00:16:06.172 "percent": 15 00:16:06.172 } 00:16:06.172 }, 00:16:06.172 "base_bdevs_list": [ 00:16:06.172 { 00:16:06.172 "name": "spare", 00:16:06.172 "uuid": "646154d4-5c3b-52f7-80a0-e1b8d57759a4", 00:16:06.172 "is_configured": true, 00:16:06.172 "data_offset": 0, 
00:16:06.172 "data_size": 65536 00:16:06.172 }, 00:16:06.172 { 00:16:06.172 "name": "BaseBdev2", 00:16:06.172 "uuid": "2a259816-336a-5c94-9f76-a7cee83cb265", 00:16:06.172 "is_configured": true, 00:16:06.172 "data_offset": 0, 00:16:06.172 "data_size": 65536 00:16:06.172 }, 00:16:06.172 { 00:16:06.172 "name": "BaseBdev3", 00:16:06.172 "uuid": "3f88bf83-03a0-5ffc-83d0-f819136a8425", 00:16:06.172 "is_configured": true, 00:16:06.172 "data_offset": 0, 00:16:06.172 "data_size": 65536 00:16:06.172 } 00:16:06.172 ] 00:16:06.172 }' 00:16:06.172 10:39:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.172 10:39:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:06.172 10:39:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.172 10:39:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:06.172 10:39:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:06.172 10:39:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:16:06.172 10:39:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:06.172 10:39:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=554 00:16:06.172 10:39:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:06.172 10:39:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:06.172 10:39:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.172 10:39:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:06.172 10:39:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:06.172 10:39:09 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.172 10:39:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.172 10:39:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.172 10:39:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.172 10:39:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.172 10:39:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.172 10:39:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.172 "name": "raid_bdev1", 00:16:06.172 "uuid": "c49f6eb2-2cf9-443a-88b5-2e609a1fc5b0", 00:16:06.172 "strip_size_kb": 64, 00:16:06.172 "state": "online", 00:16:06.172 "raid_level": "raid5f", 00:16:06.172 "superblock": false, 00:16:06.172 "num_base_bdevs": 3, 00:16:06.173 "num_base_bdevs_discovered": 3, 00:16:06.173 "num_base_bdevs_operational": 3, 00:16:06.173 "process": { 00:16:06.173 "type": "rebuild", 00:16:06.173 "target": "spare", 00:16:06.173 "progress": { 00:16:06.173 "blocks": 22528, 00:16:06.173 "percent": 17 00:16:06.173 } 00:16:06.173 }, 00:16:06.173 "base_bdevs_list": [ 00:16:06.173 { 00:16:06.173 "name": "spare", 00:16:06.173 "uuid": "646154d4-5c3b-52f7-80a0-e1b8d57759a4", 00:16:06.173 "is_configured": true, 00:16:06.173 "data_offset": 0, 00:16:06.173 "data_size": 65536 00:16:06.173 }, 00:16:06.173 { 00:16:06.173 "name": "BaseBdev2", 00:16:06.173 "uuid": "2a259816-336a-5c94-9f76-a7cee83cb265", 00:16:06.173 "is_configured": true, 00:16:06.173 "data_offset": 0, 00:16:06.173 "data_size": 65536 00:16:06.173 }, 00:16:06.173 { 00:16:06.173 "name": "BaseBdev3", 00:16:06.173 "uuid": "3f88bf83-03a0-5ffc-83d0-f819136a8425", 00:16:06.173 "is_configured": true, 00:16:06.173 "data_offset": 0, 00:16:06.173 "data_size": 65536 00:16:06.173 } 
00:16:06.173 ] 00:16:06.173 }' 00:16:06.173 10:39:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.173 10:39:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:06.173 10:39:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.173 10:39:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:06.173 10:39:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:07.574 10:39:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:07.574 10:39:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:07.574 10:39:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.574 10:39:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:07.574 10:39:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:07.574 10:39:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.574 10:39:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.574 10:39:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.574 10:39:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.574 10:39:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.574 10:39:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.574 10:39:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.574 "name": "raid_bdev1", 00:16:07.574 "uuid": "c49f6eb2-2cf9-443a-88b5-2e609a1fc5b0", 00:16:07.574 
"strip_size_kb": 64, 00:16:07.574 "state": "online", 00:16:07.574 "raid_level": "raid5f", 00:16:07.574 "superblock": false, 00:16:07.574 "num_base_bdevs": 3, 00:16:07.574 "num_base_bdevs_discovered": 3, 00:16:07.574 "num_base_bdevs_operational": 3, 00:16:07.574 "process": { 00:16:07.574 "type": "rebuild", 00:16:07.574 "target": "spare", 00:16:07.574 "progress": { 00:16:07.574 "blocks": 47104, 00:16:07.574 "percent": 35 00:16:07.574 } 00:16:07.574 }, 00:16:07.574 "base_bdevs_list": [ 00:16:07.574 { 00:16:07.574 "name": "spare", 00:16:07.574 "uuid": "646154d4-5c3b-52f7-80a0-e1b8d57759a4", 00:16:07.574 "is_configured": true, 00:16:07.574 "data_offset": 0, 00:16:07.574 "data_size": 65536 00:16:07.574 }, 00:16:07.574 { 00:16:07.574 "name": "BaseBdev2", 00:16:07.574 "uuid": "2a259816-336a-5c94-9f76-a7cee83cb265", 00:16:07.574 "is_configured": true, 00:16:07.574 "data_offset": 0, 00:16:07.574 "data_size": 65536 00:16:07.574 }, 00:16:07.574 { 00:16:07.574 "name": "BaseBdev3", 00:16:07.574 "uuid": "3f88bf83-03a0-5ffc-83d0-f819136a8425", 00:16:07.574 "is_configured": true, 00:16:07.574 "data_offset": 0, 00:16:07.574 "data_size": 65536 00:16:07.574 } 00:16:07.574 ] 00:16:07.574 }' 00:16:07.574 10:39:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.574 10:39:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:07.574 10:39:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.574 10:39:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:07.574 10:39:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:08.511 10:39:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:08.511 10:39:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.511 10:39:11 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.511 10:39:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.511 10:39:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.511 10:39:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.511 10:39:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.511 10:39:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.511 10:39:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.511 10:39:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.511 10:39:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.511 10:39:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.511 "name": "raid_bdev1", 00:16:08.511 "uuid": "c49f6eb2-2cf9-443a-88b5-2e609a1fc5b0", 00:16:08.511 "strip_size_kb": 64, 00:16:08.511 "state": "online", 00:16:08.511 "raid_level": "raid5f", 00:16:08.511 "superblock": false, 00:16:08.511 "num_base_bdevs": 3, 00:16:08.511 "num_base_bdevs_discovered": 3, 00:16:08.511 "num_base_bdevs_operational": 3, 00:16:08.511 "process": { 00:16:08.511 "type": "rebuild", 00:16:08.511 "target": "spare", 00:16:08.511 "progress": { 00:16:08.511 "blocks": 69632, 00:16:08.511 "percent": 53 00:16:08.511 } 00:16:08.511 }, 00:16:08.511 "base_bdevs_list": [ 00:16:08.511 { 00:16:08.511 "name": "spare", 00:16:08.511 "uuid": "646154d4-5c3b-52f7-80a0-e1b8d57759a4", 00:16:08.511 "is_configured": true, 00:16:08.511 "data_offset": 0, 00:16:08.511 "data_size": 65536 00:16:08.511 }, 00:16:08.511 { 00:16:08.511 "name": "BaseBdev2", 00:16:08.511 "uuid": "2a259816-336a-5c94-9f76-a7cee83cb265", 00:16:08.511 
"is_configured": true, 00:16:08.511 "data_offset": 0, 00:16:08.511 "data_size": 65536 00:16:08.511 }, 00:16:08.511 { 00:16:08.511 "name": "BaseBdev3", 00:16:08.511 "uuid": "3f88bf83-03a0-5ffc-83d0-f819136a8425", 00:16:08.511 "is_configured": true, 00:16:08.511 "data_offset": 0, 00:16:08.511 "data_size": 65536 00:16:08.511 } 00:16:08.511 ] 00:16:08.511 }' 00:16:08.511 10:39:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.511 10:39:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:08.511 10:39:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.511 10:39:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.511 10:39:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:09.889 10:39:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:09.889 10:39:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:09.889 10:39:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.889 10:39:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:09.889 10:39:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:09.889 10:39:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.889 10:39:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.889 10:39:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.889 10:39:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.889 10:39:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:16:09.889 10:39:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.889 10:39:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.889 "name": "raid_bdev1", 00:16:09.889 "uuid": "c49f6eb2-2cf9-443a-88b5-2e609a1fc5b0", 00:16:09.889 "strip_size_kb": 64, 00:16:09.889 "state": "online", 00:16:09.889 "raid_level": "raid5f", 00:16:09.889 "superblock": false, 00:16:09.889 "num_base_bdevs": 3, 00:16:09.889 "num_base_bdevs_discovered": 3, 00:16:09.889 "num_base_bdevs_operational": 3, 00:16:09.889 "process": { 00:16:09.889 "type": "rebuild", 00:16:09.889 "target": "spare", 00:16:09.889 "progress": { 00:16:09.889 "blocks": 92160, 00:16:09.889 "percent": 70 00:16:09.889 } 00:16:09.889 }, 00:16:09.889 "base_bdevs_list": [ 00:16:09.889 { 00:16:09.889 "name": "spare", 00:16:09.889 "uuid": "646154d4-5c3b-52f7-80a0-e1b8d57759a4", 00:16:09.889 "is_configured": true, 00:16:09.889 "data_offset": 0, 00:16:09.889 "data_size": 65536 00:16:09.890 }, 00:16:09.890 { 00:16:09.890 "name": "BaseBdev2", 00:16:09.890 "uuid": "2a259816-336a-5c94-9f76-a7cee83cb265", 00:16:09.890 "is_configured": true, 00:16:09.890 "data_offset": 0, 00:16:09.890 "data_size": 65536 00:16:09.890 }, 00:16:09.890 { 00:16:09.890 "name": "BaseBdev3", 00:16:09.890 "uuid": "3f88bf83-03a0-5ffc-83d0-f819136a8425", 00:16:09.890 "is_configured": true, 00:16:09.890 "data_offset": 0, 00:16:09.890 "data_size": 65536 00:16:09.890 } 00:16:09.890 ] 00:16:09.890 }' 00:16:09.890 10:39:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.890 10:39:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:09.890 10:39:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.890 10:39:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:09.890 10:39:13 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:10.826 10:39:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:10.826 10:39:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.826 10:39:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.826 10:39:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.826 10:39:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.826 10:39:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.826 10:39:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.826 10:39:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.826 10:39:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.826 10:39:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.826 10:39:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.826 10:39:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.826 "name": "raid_bdev1", 00:16:10.826 "uuid": "c49f6eb2-2cf9-443a-88b5-2e609a1fc5b0", 00:16:10.826 "strip_size_kb": 64, 00:16:10.826 "state": "online", 00:16:10.826 "raid_level": "raid5f", 00:16:10.826 "superblock": false, 00:16:10.826 "num_base_bdevs": 3, 00:16:10.826 "num_base_bdevs_discovered": 3, 00:16:10.826 "num_base_bdevs_operational": 3, 00:16:10.826 "process": { 00:16:10.826 "type": "rebuild", 00:16:10.826 "target": "spare", 00:16:10.826 "progress": { 00:16:10.826 "blocks": 116736, 00:16:10.826 "percent": 89 00:16:10.826 } 00:16:10.826 }, 00:16:10.826 "base_bdevs_list": [ 00:16:10.826 { 
00:16:10.826 "name": "spare", 00:16:10.826 "uuid": "646154d4-5c3b-52f7-80a0-e1b8d57759a4", 00:16:10.826 "is_configured": true, 00:16:10.826 "data_offset": 0, 00:16:10.826 "data_size": 65536 00:16:10.826 }, 00:16:10.826 { 00:16:10.826 "name": "BaseBdev2", 00:16:10.826 "uuid": "2a259816-336a-5c94-9f76-a7cee83cb265", 00:16:10.826 "is_configured": true, 00:16:10.826 "data_offset": 0, 00:16:10.826 "data_size": 65536 00:16:10.826 }, 00:16:10.826 { 00:16:10.826 "name": "BaseBdev3", 00:16:10.826 "uuid": "3f88bf83-03a0-5ffc-83d0-f819136a8425", 00:16:10.826 "is_configured": true, 00:16:10.826 "data_offset": 0, 00:16:10.826 "data_size": 65536 00:16:10.826 } 00:16:10.826 ] 00:16:10.826 }' 00:16:10.826 10:39:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.826 10:39:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:10.826 10:39:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.826 10:39:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:10.826 10:39:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:11.394 [2024-11-20 10:39:14.800012] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:11.394 [2024-11-20 10:39:14.800217] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:11.394 [2024-11-20 10:39:14.800291] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.963 10:39:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:11.963 10:39:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:11.963 10:39:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.963 10:39:15 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:11.963 10:39:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:11.963 10:39:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.963 10:39:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.963 10:39:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.963 10:39:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.963 10:39:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.963 10:39:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.963 10:39:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.963 "name": "raid_bdev1", 00:16:11.963 "uuid": "c49f6eb2-2cf9-443a-88b5-2e609a1fc5b0", 00:16:11.963 "strip_size_kb": 64, 00:16:11.963 "state": "online", 00:16:11.963 "raid_level": "raid5f", 00:16:11.963 "superblock": false, 00:16:11.963 "num_base_bdevs": 3, 00:16:11.963 "num_base_bdevs_discovered": 3, 00:16:11.963 "num_base_bdevs_operational": 3, 00:16:11.963 "base_bdevs_list": [ 00:16:11.963 { 00:16:11.963 "name": "spare", 00:16:11.963 "uuid": "646154d4-5c3b-52f7-80a0-e1b8d57759a4", 00:16:11.963 "is_configured": true, 00:16:11.963 "data_offset": 0, 00:16:11.963 "data_size": 65536 00:16:11.963 }, 00:16:11.963 { 00:16:11.963 "name": "BaseBdev2", 00:16:11.963 "uuid": "2a259816-336a-5c94-9f76-a7cee83cb265", 00:16:11.963 "is_configured": true, 00:16:11.963 "data_offset": 0, 00:16:11.963 "data_size": 65536 00:16:11.963 }, 00:16:11.963 { 00:16:11.963 "name": "BaseBdev3", 00:16:11.963 "uuid": "3f88bf83-03a0-5ffc-83d0-f819136a8425", 00:16:11.963 "is_configured": true, 00:16:11.963 "data_offset": 0, 00:16:11.963 "data_size": 65536 00:16:11.963 } 
00:16:11.963 ] 00:16:11.963 }' 00:16:11.963 10:39:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.963 10:39:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:11.963 10:39:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.963 10:39:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:11.963 10:39:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:11.963 10:39:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:11.963 10:39:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.963 10:39:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:11.963 10:39:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:11.963 10:39:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.963 10:39:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.963 10:39:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.963 10:39:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.963 10:39:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.963 10:39:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.223 10:39:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.223 "name": "raid_bdev1", 00:16:12.223 "uuid": "c49f6eb2-2cf9-443a-88b5-2e609a1fc5b0", 00:16:12.223 "strip_size_kb": 64, 00:16:12.223 "state": "online", 00:16:12.223 "raid_level": "raid5f", 00:16:12.223 "superblock": false, 
00:16:12.223 "num_base_bdevs": 3, 00:16:12.223 "num_base_bdevs_discovered": 3, 00:16:12.223 "num_base_bdevs_operational": 3, 00:16:12.223 "base_bdevs_list": [ 00:16:12.223 { 00:16:12.223 "name": "spare", 00:16:12.223 "uuid": "646154d4-5c3b-52f7-80a0-e1b8d57759a4", 00:16:12.223 "is_configured": true, 00:16:12.223 "data_offset": 0, 00:16:12.223 "data_size": 65536 00:16:12.223 }, 00:16:12.223 { 00:16:12.223 "name": "BaseBdev2", 00:16:12.223 "uuid": "2a259816-336a-5c94-9f76-a7cee83cb265", 00:16:12.223 "is_configured": true, 00:16:12.223 "data_offset": 0, 00:16:12.223 "data_size": 65536 00:16:12.223 }, 00:16:12.223 { 00:16:12.223 "name": "BaseBdev3", 00:16:12.223 "uuid": "3f88bf83-03a0-5ffc-83d0-f819136a8425", 00:16:12.223 "is_configured": true, 00:16:12.223 "data_offset": 0, 00:16:12.223 "data_size": 65536 00:16:12.223 } 00:16:12.223 ] 00:16:12.223 }' 00:16:12.223 10:39:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.223 10:39:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:12.223 10:39:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.223 10:39:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:12.223 10:39:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:12.223 10:39:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.223 10:39:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.223 10:39:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:12.223 10:39:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:12.223 10:39:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:12.223 
10:39:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.223 10:39:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.223 10:39:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.223 10:39:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.223 10:39:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.223 10:39:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.223 10:39:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.223 10:39:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.223 10:39:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.224 10:39:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.224 "name": "raid_bdev1", 00:16:12.224 "uuid": "c49f6eb2-2cf9-443a-88b5-2e609a1fc5b0", 00:16:12.224 "strip_size_kb": 64, 00:16:12.224 "state": "online", 00:16:12.224 "raid_level": "raid5f", 00:16:12.224 "superblock": false, 00:16:12.224 "num_base_bdevs": 3, 00:16:12.224 "num_base_bdevs_discovered": 3, 00:16:12.224 "num_base_bdevs_operational": 3, 00:16:12.224 "base_bdevs_list": [ 00:16:12.224 { 00:16:12.224 "name": "spare", 00:16:12.224 "uuid": "646154d4-5c3b-52f7-80a0-e1b8d57759a4", 00:16:12.224 "is_configured": true, 00:16:12.224 "data_offset": 0, 00:16:12.224 "data_size": 65536 00:16:12.224 }, 00:16:12.224 { 00:16:12.224 "name": "BaseBdev2", 00:16:12.224 "uuid": "2a259816-336a-5c94-9f76-a7cee83cb265", 00:16:12.224 "is_configured": true, 00:16:12.224 "data_offset": 0, 00:16:12.224 "data_size": 65536 00:16:12.224 }, 00:16:12.224 { 00:16:12.224 "name": "BaseBdev3", 00:16:12.224 "uuid": "3f88bf83-03a0-5ffc-83d0-f819136a8425", 
00:16:12.224 "is_configured": true, 00:16:12.224 "data_offset": 0, 00:16:12.224 "data_size": 65536 00:16:12.224 } 00:16:12.224 ] 00:16:12.224 }' 00:16:12.224 10:39:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.224 10:39:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.793 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:12.793 10:39:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.793 10:39:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.793 [2024-11-20 10:39:16.022493] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:12.793 [2024-11-20 10:39:16.022579] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:12.793 [2024-11-20 10:39:16.022694] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:12.793 [2024-11-20 10:39:16.022790] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:12.793 [2024-11-20 10:39:16.022807] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:12.793 10:39:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.793 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.793 10:39:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.793 10:39:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.793 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:12.793 10:39:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.793 10:39:16 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:12.793 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:12.793 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:12.793 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:12.793 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:12.793 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:12.793 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:12.793 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:12.793 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:12.793 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:12.793 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:12.793 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:12.793 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:13.059 /dev/nbd0 00:16:13.059 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:13.059 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:13.059 10:39:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:13.059 10:39:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:13.059 10:39:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:13.059 10:39:16 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:13.059 10:39:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:13.059 10:39:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:13.059 10:39:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:13.059 10:39:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:13.059 10:39:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:13.059 1+0 records in 00:16:13.059 1+0 records out 00:16:13.059 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000562646 s, 7.3 MB/s 00:16:13.059 10:39:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:13.059 10:39:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:13.059 10:39:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:13.059 10:39:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:13.059 10:39:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:13.059 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:13.059 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:13.059 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:13.327 /dev/nbd1 00:16:13.327 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:13.327 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:13.327 10:39:16 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:13.327 10:39:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:13.327 10:39:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:13.327 10:39:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:13.327 10:39:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:13.327 10:39:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:13.327 10:39:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:13.327 10:39:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:13.327 10:39:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:13.327 1+0 records in 00:16:13.327 1+0 records out 00:16:13.327 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405601 s, 10.1 MB/s 00:16:13.327 10:39:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:13.327 10:39:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:13.327 10:39:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:13.327 10:39:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:13.327 10:39:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:13.327 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:13.327 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:13.327 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:13.327 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:13.327 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:13.327 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:13.327 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:13.327 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:13.327 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:13.327 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:13.587 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:13.588 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:13.588 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:13.588 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:13.588 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:13.588 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:13.588 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:13.588 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:13.588 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:13.588 10:39:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:13.847 10:39:17 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:13.847 10:39:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:13.847 10:39:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:13.847 10:39:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:13.847 10:39:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:13.847 10:39:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:13.847 10:39:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:13.847 10:39:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:13.847 10:39:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:13.847 10:39:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81736 00:16:13.847 10:39:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81736 ']' 00:16:13.847 10:39:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81736 00:16:13.847 10:39:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:13.847 10:39:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:13.847 10:39:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81736 00:16:13.847 10:39:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:13.847 killing process with pid 81736 00:16:13.848 Received shutdown signal, test time was about 60.000000 seconds 00:16:13.848 00:16:13.848 Latency(us) 00:16:13.848 [2024-11-20T10:39:17.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:13.848 [2024-11-20T10:39:17.327Z] 
=================================================================================================================== 00:16:13.848 [2024-11-20T10:39:17.327Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:13.848 10:39:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:13.848 10:39:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81736' 00:16:13.848 10:39:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81736 00:16:13.848 [2024-11-20 10:39:17.257513] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:13.848 10:39:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81736 00:16:14.416 [2024-11-20 10:39:17.651689] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:15.355 10:39:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:15.355 ************************************ 00:16:15.355 END TEST raid5f_rebuild_test 00:16:15.355 ************************************ 00:16:15.355 00:16:15.355 real 0m15.256s 00:16:15.355 user 0m18.768s 00:16:15.355 sys 0m2.006s 00:16:15.355 10:39:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:15.355 10:39:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.355 10:39:18 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:16:15.355 10:39:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:15.356 10:39:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:15.356 10:39:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:15.356 ************************************ 00:16:15.356 START TEST raid5f_rebuild_test_sb 00:16:15.356 ************************************ 00:16:15.356 10:39:18 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:16:15.356 10:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:15.356 10:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:16:15.356 10:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:15.356 10:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:15.356 10:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:15.356 10:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:15.356 10:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:15.356 10:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:15.356 10:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:15.356 10:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:15.356 10:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:15.356 10:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:15.356 10:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:15.356 10:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:15.356 10:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:15.356 10:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:15.356 10:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:15.356 10:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:15.356 
10:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:15.356 10:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:15.356 10:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:15.356 10:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:15.356 10:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:15.356 10:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:15.356 10:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:15.356 10:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:15.356 10:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:15.356 10:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:15.356 10:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:15.356 10:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82176 00:16:15.356 10:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82176 00:16:15.356 10:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:15.356 10:39:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82176 ']' 00:16:15.356 10:39:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.356 10:39:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:15.356 10:39:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- 
# echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.356 10:39:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:15.356 10:39:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.616 [2024-11-20 10:39:18.892987] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:16:15.616 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:15.616 Zero copy mechanism will not be used. 00:16:15.616 [2024-11-20 10:39:18.893200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82176 ] 00:16:15.616 [2024-11-20 10:39:19.068028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.875 [2024-11-20 10:39:19.180248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.135 [2024-11-20 10:39:19.369896] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:16.135 [2024-11-20 10:39:19.370053] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:16.395 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:16.395 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:16.395 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:16.395 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:16.395 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.395 10:39:19 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.395 BaseBdev1_malloc 00:16:16.395 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.395 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:16.395 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.395 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.395 [2024-11-20 10:39:19.779255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:16.395 [2024-11-20 10:39:19.779389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.395 [2024-11-20 10:39:19.779443] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:16.395 [2024-11-20 10:39:19.779476] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.395 [2024-11-20 10:39:19.781545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.395 [2024-11-20 10:39:19.781617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:16.395 BaseBdev1 00:16:16.395 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.395 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:16.395 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:16.395 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.395 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.395 BaseBdev2_malloc 00:16:16.395 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.395 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:16.395 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.395 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.395 [2024-11-20 10:39:19.835118] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:16.395 [2024-11-20 10:39:19.835254] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.395 [2024-11-20 10:39:19.835291] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:16.395 [2024-11-20 10:39:19.835330] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.395 [2024-11-20 10:39:19.837395] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.395 [2024-11-20 10:39:19.837464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:16.395 BaseBdev2 00:16:16.395 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.395 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:16.395 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:16.395 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.395 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.656 BaseBdev3_malloc 00:16:16.656 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.656 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:16:16.656 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.656 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.656 [2024-11-20 10:39:19.900309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:16.656 [2024-11-20 10:39:19.900449] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.656 [2024-11-20 10:39:19.900489] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:16.656 [2024-11-20 10:39:19.900517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.656 [2024-11-20 10:39:19.902575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.656 [2024-11-20 10:39:19.902649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:16.656 BaseBdev3 00:16:16.656 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.656 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:16.656 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.656 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.656 spare_malloc 00:16:16.656 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.656 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:16.656 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.656 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.656 spare_delay 00:16:16.656 
10:39:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.656 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:16.656 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.656 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.656 [2024-11-20 10:39:19.962682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:16.656 [2024-11-20 10:39:19.962780] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.656 [2024-11-20 10:39:19.962813] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:16.656 [2024-11-20 10:39:19.962842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.656 [2024-11-20 10:39:19.964962] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.656 [2024-11-20 10:39:19.965055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:16.656 spare 00:16:16.656 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.656 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:16:16.656 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.656 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.656 [2024-11-20 10:39:19.974731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:16.656 [2024-11-20 10:39:19.976547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:16.656 [2024-11-20 10:39:19.976661] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:16.656 [2024-11-20 10:39:19.976890] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:16.656 [2024-11-20 10:39:19.976947] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:16.656 [2024-11-20 10:39:19.977234] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:16.656 [2024-11-20 10:39:19.982168] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:16.656 [2024-11-20 10:39:19.982222] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:16.656 [2024-11-20 10:39:19.982461] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.657 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.657 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:16.657 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.657 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.657 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:16.657 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.657 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:16.657 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.657 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.657 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:16.657 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.657 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.657 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.657 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.657 10:39:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.657 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.657 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.657 "name": "raid_bdev1", 00:16:16.657 "uuid": "9742dd30-5d5e-4288-b4ca-c0821595d200", 00:16:16.657 "strip_size_kb": 64, 00:16:16.657 "state": "online", 00:16:16.657 "raid_level": "raid5f", 00:16:16.657 "superblock": true, 00:16:16.657 "num_base_bdevs": 3, 00:16:16.657 "num_base_bdevs_discovered": 3, 00:16:16.657 "num_base_bdevs_operational": 3, 00:16:16.657 "base_bdevs_list": [ 00:16:16.657 { 00:16:16.657 "name": "BaseBdev1", 00:16:16.657 "uuid": "a85c3e8e-6f11-5748-8f4f-0a27d584bb05", 00:16:16.657 "is_configured": true, 00:16:16.657 "data_offset": 2048, 00:16:16.657 "data_size": 63488 00:16:16.657 }, 00:16:16.657 { 00:16:16.657 "name": "BaseBdev2", 00:16:16.657 "uuid": "f2b8aed4-de5a-5026-b048-3c1fbd0b58d2", 00:16:16.657 "is_configured": true, 00:16:16.657 "data_offset": 2048, 00:16:16.657 "data_size": 63488 00:16:16.657 }, 00:16:16.657 { 00:16:16.657 "name": "BaseBdev3", 00:16:16.657 "uuid": "c5a7dcbf-de0f-5f3e-9c01-80b9a714cd26", 00:16:16.657 "is_configured": true, 00:16:16.657 "data_offset": 2048, 00:16:16.657 "data_size": 63488 00:16:16.657 } 00:16:16.657 ] 00:16:16.657 }' 00:16:16.657 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.657 10:39:20 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.227 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:17.227 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:17.227 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.227 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.227 [2024-11-20 10:39:20.468460] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:17.227 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.227 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:16:17.227 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.227 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:17.227 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.227 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.227 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.227 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:17.227 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:17.227 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:17.228 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:17.228 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:17.228 10:39:20 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:17.228 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:17.228 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:17.228 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:17.228 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:17.228 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:17.228 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:17.228 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:17.228 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:17.488 [2024-11-20 10:39:20.743852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:17.488 /dev/nbd0 00:16:17.488 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:17.488 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:17.488 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:17.488 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:17.488 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:17.488 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:17.488 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:17.488 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 
00:16:17.488 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:17.488 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:17.488 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:17.488 1+0 records in 00:16:17.488 1+0 records out 00:16:17.488 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000570255 s, 7.2 MB/s 00:16:17.488 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:17.488 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:17.488 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:17.488 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:17.488 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:17.488 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:17.488 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:17.488 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:17.488 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:16:17.488 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:16:17.488 10:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:16:17.747 496+0 records in 00:16:17.747 496+0 records out 00:16:17.747 65011712 bytes (65 MB, 62 MiB) copied, 0.364144 s, 179 MB/s 00:16:17.747 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:17.747 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:17.747 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:17.747 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:17.747 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:17.747 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:17.747 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:18.006 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:18.006 [2024-11-20 10:39:21.391114] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.006 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:18.006 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:18.006 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:18.007 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:18.007 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:18.007 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:18.007 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:18.007 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:18.007 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.007 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:18.007 [2024-11-20 10:39:21.408043] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:18.007 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.007 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:18.007 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.007 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.007 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:18.007 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.007 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:18.007 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.007 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.007 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.007 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.007 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.007 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.007 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.007 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.007 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.007 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.007 "name": "raid_bdev1", 00:16:18.007 "uuid": "9742dd30-5d5e-4288-b4ca-c0821595d200", 00:16:18.007 "strip_size_kb": 64, 00:16:18.007 "state": "online", 00:16:18.007 "raid_level": "raid5f", 00:16:18.007 "superblock": true, 00:16:18.007 "num_base_bdevs": 3, 00:16:18.007 "num_base_bdevs_discovered": 2, 00:16:18.007 "num_base_bdevs_operational": 2, 00:16:18.007 "base_bdevs_list": [ 00:16:18.007 { 00:16:18.007 "name": null, 00:16:18.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.007 "is_configured": false, 00:16:18.007 "data_offset": 0, 00:16:18.007 "data_size": 63488 00:16:18.007 }, 00:16:18.007 { 00:16:18.007 "name": "BaseBdev2", 00:16:18.007 "uuid": "f2b8aed4-de5a-5026-b048-3c1fbd0b58d2", 00:16:18.007 "is_configured": true, 00:16:18.007 "data_offset": 2048, 00:16:18.007 "data_size": 63488 00:16:18.007 }, 00:16:18.007 { 00:16:18.007 "name": "BaseBdev3", 00:16:18.007 "uuid": "c5a7dcbf-de0f-5f3e-9c01-80b9a714cd26", 00:16:18.007 "is_configured": true, 00:16:18.007 "data_offset": 2048, 00:16:18.007 "data_size": 63488 00:16:18.007 } 00:16:18.007 ] 00:16:18.007 }' 00:16:18.007 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.007 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.575 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:18.575 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.575 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.575 [2024-11-20 10:39:21.851321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:18.575 [2024-11-20 10:39:21.870086] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:16:18.575 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.575 10:39:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:18.575 [2024-11-20 10:39:21.878456] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:19.516 10:39:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:19.516 10:39:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.516 10:39:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:19.516 10:39:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:19.516 10:39:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.516 10:39:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.516 10:39:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.516 10:39:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.516 10:39:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.516 10:39:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.516 10:39:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.516 "name": "raid_bdev1", 00:16:19.516 "uuid": "9742dd30-5d5e-4288-b4ca-c0821595d200", 00:16:19.516 "strip_size_kb": 64, 00:16:19.516 "state": "online", 00:16:19.516 "raid_level": "raid5f", 00:16:19.516 "superblock": true, 00:16:19.516 "num_base_bdevs": 3, 00:16:19.516 "num_base_bdevs_discovered": 3, 00:16:19.516 "num_base_bdevs_operational": 3, 00:16:19.516 "process": { 00:16:19.516 "type": "rebuild", 00:16:19.516 "target": "spare", 00:16:19.516 "progress": { 
00:16:19.516 "blocks": 20480, 00:16:19.516 "percent": 16 00:16:19.516 } 00:16:19.516 }, 00:16:19.516 "base_bdevs_list": [ 00:16:19.516 { 00:16:19.516 "name": "spare", 00:16:19.516 "uuid": "307df4c8-4bc4-59cd-8a8d-eae5c0a07695", 00:16:19.516 "is_configured": true, 00:16:19.516 "data_offset": 2048, 00:16:19.516 "data_size": 63488 00:16:19.516 }, 00:16:19.516 { 00:16:19.516 "name": "BaseBdev2", 00:16:19.516 "uuid": "f2b8aed4-de5a-5026-b048-3c1fbd0b58d2", 00:16:19.516 "is_configured": true, 00:16:19.516 "data_offset": 2048, 00:16:19.516 "data_size": 63488 00:16:19.516 }, 00:16:19.516 { 00:16:19.516 "name": "BaseBdev3", 00:16:19.516 "uuid": "c5a7dcbf-de0f-5f3e-9c01-80b9a714cd26", 00:16:19.516 "is_configured": true, 00:16:19.516 "data_offset": 2048, 00:16:19.516 "data_size": 63488 00:16:19.516 } 00:16:19.516 ] 00:16:19.516 }' 00:16:19.516 10:39:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.516 10:39:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:19.516 10:39:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.778 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:19.778 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:19.778 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.778 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.778 [2024-11-20 10:39:23.025068] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:19.778 [2024-11-20 10:39:23.089028] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:19.778 [2024-11-20 10:39:23.089114] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:16:19.778 [2024-11-20 10:39:23.089134] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:19.778 [2024-11-20 10:39:23.089144] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:19.778 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.778 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:19.778 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:19.778 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.778 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.778 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.778 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:19.778 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.778 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.778 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.778 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.778 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.778 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.778 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.778 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.778 10:39:23 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.778 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.778 "name": "raid_bdev1", 00:16:19.778 "uuid": "9742dd30-5d5e-4288-b4ca-c0821595d200", 00:16:19.778 "strip_size_kb": 64, 00:16:19.778 "state": "online", 00:16:19.778 "raid_level": "raid5f", 00:16:19.778 "superblock": true, 00:16:19.778 "num_base_bdevs": 3, 00:16:19.778 "num_base_bdevs_discovered": 2, 00:16:19.778 "num_base_bdevs_operational": 2, 00:16:19.778 "base_bdevs_list": [ 00:16:19.778 { 00:16:19.778 "name": null, 00:16:19.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.778 "is_configured": false, 00:16:19.778 "data_offset": 0, 00:16:19.778 "data_size": 63488 00:16:19.778 }, 00:16:19.778 { 00:16:19.778 "name": "BaseBdev2", 00:16:19.778 "uuid": "f2b8aed4-de5a-5026-b048-3c1fbd0b58d2", 00:16:19.778 "is_configured": true, 00:16:19.778 "data_offset": 2048, 00:16:19.778 "data_size": 63488 00:16:19.778 }, 00:16:19.778 { 00:16:19.778 "name": "BaseBdev3", 00:16:19.778 "uuid": "c5a7dcbf-de0f-5f3e-9c01-80b9a714cd26", 00:16:19.778 "is_configured": true, 00:16:19.778 "data_offset": 2048, 00:16:19.778 "data_size": 63488 00:16:19.778 } 00:16:19.778 ] 00:16:19.778 }' 00:16:19.778 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.778 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.347 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:20.347 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.347 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:20.347 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:20.347 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.347 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.347 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.347 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.347 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.347 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.347 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.347 "name": "raid_bdev1", 00:16:20.347 "uuid": "9742dd30-5d5e-4288-b4ca-c0821595d200", 00:16:20.347 "strip_size_kb": 64, 00:16:20.347 "state": "online", 00:16:20.347 "raid_level": "raid5f", 00:16:20.347 "superblock": true, 00:16:20.347 "num_base_bdevs": 3, 00:16:20.347 "num_base_bdevs_discovered": 2, 00:16:20.347 "num_base_bdevs_operational": 2, 00:16:20.347 "base_bdevs_list": [ 00:16:20.347 { 00:16:20.347 "name": null, 00:16:20.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.347 "is_configured": false, 00:16:20.347 "data_offset": 0, 00:16:20.347 "data_size": 63488 00:16:20.347 }, 00:16:20.347 { 00:16:20.347 "name": "BaseBdev2", 00:16:20.347 "uuid": "f2b8aed4-de5a-5026-b048-3c1fbd0b58d2", 00:16:20.347 "is_configured": true, 00:16:20.347 "data_offset": 2048, 00:16:20.347 "data_size": 63488 00:16:20.347 }, 00:16:20.347 { 00:16:20.347 "name": "BaseBdev3", 00:16:20.347 "uuid": "c5a7dcbf-de0f-5f3e-9c01-80b9a714cd26", 00:16:20.347 "is_configured": true, 00:16:20.347 "data_offset": 2048, 00:16:20.347 "data_size": 63488 00:16:20.347 } 00:16:20.347 ] 00:16:20.347 }' 00:16:20.347 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.347 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:20.347 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.347 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:20.347 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:20.347 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.347 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.347 [2024-11-20 10:39:23.678997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:20.347 [2024-11-20 10:39:23.695003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:16:20.347 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.347 10:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:20.347 [2024-11-20 10:39:23.702113] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:21.288 10:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:21.288 10:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.288 10:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:21.288 10:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:21.288 10:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.288 10:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.288 10:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:16:21.288 10:39:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.288 10:39:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.288 10:39:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.288 10:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.288 "name": "raid_bdev1", 00:16:21.288 "uuid": "9742dd30-5d5e-4288-b4ca-c0821595d200", 00:16:21.288 "strip_size_kb": 64, 00:16:21.288 "state": "online", 00:16:21.288 "raid_level": "raid5f", 00:16:21.288 "superblock": true, 00:16:21.288 "num_base_bdevs": 3, 00:16:21.288 "num_base_bdevs_discovered": 3, 00:16:21.288 "num_base_bdevs_operational": 3, 00:16:21.288 "process": { 00:16:21.288 "type": "rebuild", 00:16:21.288 "target": "spare", 00:16:21.288 "progress": { 00:16:21.288 "blocks": 20480, 00:16:21.288 "percent": 16 00:16:21.288 } 00:16:21.288 }, 00:16:21.288 "base_bdevs_list": [ 00:16:21.288 { 00:16:21.288 "name": "spare", 00:16:21.288 "uuid": "307df4c8-4bc4-59cd-8a8d-eae5c0a07695", 00:16:21.288 "is_configured": true, 00:16:21.289 "data_offset": 2048, 00:16:21.289 "data_size": 63488 00:16:21.289 }, 00:16:21.289 { 00:16:21.289 "name": "BaseBdev2", 00:16:21.289 "uuid": "f2b8aed4-de5a-5026-b048-3c1fbd0b58d2", 00:16:21.289 "is_configured": true, 00:16:21.289 "data_offset": 2048, 00:16:21.289 "data_size": 63488 00:16:21.289 }, 00:16:21.289 { 00:16:21.289 "name": "BaseBdev3", 00:16:21.289 "uuid": "c5a7dcbf-de0f-5f3e-9c01-80b9a714cd26", 00:16:21.289 "is_configured": true, 00:16:21.289 "data_offset": 2048, 00:16:21.289 "data_size": 63488 00:16:21.289 } 00:16:21.289 ] 00:16:21.289 }' 00:16:21.289 10:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.549 10:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:21.549 
10:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.549 10:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:21.549 10:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:21.549 10:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:21.549 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:21.549 10:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:16:21.549 10:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:21.549 10:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=569 00:16:21.549 10:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:21.549 10:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:21.549 10:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.549 10:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:21.549 10:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:21.549 10:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.549 10:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.549 10:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.549 10:39:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.549 10:39:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:21.549 10:39:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.549 10:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.549 "name": "raid_bdev1", 00:16:21.549 "uuid": "9742dd30-5d5e-4288-b4ca-c0821595d200", 00:16:21.549 "strip_size_kb": 64, 00:16:21.549 "state": "online", 00:16:21.549 "raid_level": "raid5f", 00:16:21.549 "superblock": true, 00:16:21.549 "num_base_bdevs": 3, 00:16:21.549 "num_base_bdevs_discovered": 3, 00:16:21.549 "num_base_bdevs_operational": 3, 00:16:21.549 "process": { 00:16:21.549 "type": "rebuild", 00:16:21.549 "target": "spare", 00:16:21.549 "progress": { 00:16:21.549 "blocks": 22528, 00:16:21.549 "percent": 17 00:16:21.549 } 00:16:21.549 }, 00:16:21.549 "base_bdevs_list": [ 00:16:21.549 { 00:16:21.549 "name": "spare", 00:16:21.549 "uuid": "307df4c8-4bc4-59cd-8a8d-eae5c0a07695", 00:16:21.549 "is_configured": true, 00:16:21.549 "data_offset": 2048, 00:16:21.549 "data_size": 63488 00:16:21.549 }, 00:16:21.549 { 00:16:21.549 "name": "BaseBdev2", 00:16:21.549 "uuid": "f2b8aed4-de5a-5026-b048-3c1fbd0b58d2", 00:16:21.549 "is_configured": true, 00:16:21.549 "data_offset": 2048, 00:16:21.549 "data_size": 63488 00:16:21.549 }, 00:16:21.549 { 00:16:21.549 "name": "BaseBdev3", 00:16:21.549 "uuid": "c5a7dcbf-de0f-5f3e-9c01-80b9a714cd26", 00:16:21.549 "is_configured": true, 00:16:21.549 "data_offset": 2048, 00:16:21.549 "data_size": 63488 00:16:21.549 } 00:16:21.549 ] 00:16:21.549 }' 00:16:21.549 10:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.549 10:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:21.549 10:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.549 10:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:21.549 
10:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:22.931 10:39:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:22.931 10:39:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:22.931 10:39:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.931 10:39:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:22.931 10:39:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:22.931 10:39:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.931 10:39:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.931 10:39:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.931 10:39:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.931 10:39:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.931 10:39:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.931 10:39:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.931 "name": "raid_bdev1", 00:16:22.931 "uuid": "9742dd30-5d5e-4288-b4ca-c0821595d200", 00:16:22.931 "strip_size_kb": 64, 00:16:22.931 "state": "online", 00:16:22.931 "raid_level": "raid5f", 00:16:22.931 "superblock": true, 00:16:22.931 "num_base_bdevs": 3, 00:16:22.931 "num_base_bdevs_discovered": 3, 00:16:22.931 "num_base_bdevs_operational": 3, 00:16:22.931 "process": { 00:16:22.931 "type": "rebuild", 00:16:22.931 "target": "spare", 00:16:22.931 "progress": { 00:16:22.931 "blocks": 45056, 00:16:22.931 "percent": 35 00:16:22.931 } 00:16:22.931 }, 00:16:22.931 
"base_bdevs_list": [ 00:16:22.931 { 00:16:22.931 "name": "spare", 00:16:22.931 "uuid": "307df4c8-4bc4-59cd-8a8d-eae5c0a07695", 00:16:22.931 "is_configured": true, 00:16:22.931 "data_offset": 2048, 00:16:22.931 "data_size": 63488 00:16:22.931 }, 00:16:22.931 { 00:16:22.931 "name": "BaseBdev2", 00:16:22.931 "uuid": "f2b8aed4-de5a-5026-b048-3c1fbd0b58d2", 00:16:22.931 "is_configured": true, 00:16:22.931 "data_offset": 2048, 00:16:22.931 "data_size": 63488 00:16:22.931 }, 00:16:22.931 { 00:16:22.931 "name": "BaseBdev3", 00:16:22.931 "uuid": "c5a7dcbf-de0f-5f3e-9c01-80b9a714cd26", 00:16:22.931 "is_configured": true, 00:16:22.931 "data_offset": 2048, 00:16:22.931 "data_size": 63488 00:16:22.931 } 00:16:22.931 ] 00:16:22.931 }' 00:16:22.931 10:39:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.931 10:39:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:22.931 10:39:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.931 10:39:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:22.931 10:39:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:23.870 10:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:23.870 10:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:23.870 10:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:23.870 10:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:23.870 10:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:23.870 10:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:23.870 10:39:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.870 10:39:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.870 10:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.870 10:39:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.870 10:39:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.870 10:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:23.870 "name": "raid_bdev1", 00:16:23.870 "uuid": "9742dd30-5d5e-4288-b4ca-c0821595d200", 00:16:23.870 "strip_size_kb": 64, 00:16:23.870 "state": "online", 00:16:23.870 "raid_level": "raid5f", 00:16:23.870 "superblock": true, 00:16:23.870 "num_base_bdevs": 3, 00:16:23.870 "num_base_bdevs_discovered": 3, 00:16:23.870 "num_base_bdevs_operational": 3, 00:16:23.870 "process": { 00:16:23.870 "type": "rebuild", 00:16:23.870 "target": "spare", 00:16:23.870 "progress": { 00:16:23.870 "blocks": 69632, 00:16:23.870 "percent": 54 00:16:23.870 } 00:16:23.870 }, 00:16:23.870 "base_bdevs_list": [ 00:16:23.870 { 00:16:23.870 "name": "spare", 00:16:23.870 "uuid": "307df4c8-4bc4-59cd-8a8d-eae5c0a07695", 00:16:23.870 "is_configured": true, 00:16:23.870 "data_offset": 2048, 00:16:23.870 "data_size": 63488 00:16:23.870 }, 00:16:23.870 { 00:16:23.870 "name": "BaseBdev2", 00:16:23.870 "uuid": "f2b8aed4-de5a-5026-b048-3c1fbd0b58d2", 00:16:23.870 "is_configured": true, 00:16:23.870 "data_offset": 2048, 00:16:23.870 "data_size": 63488 00:16:23.870 }, 00:16:23.870 { 00:16:23.870 "name": "BaseBdev3", 00:16:23.870 "uuid": "c5a7dcbf-de0f-5f3e-9c01-80b9a714cd26", 00:16:23.870 "is_configured": true, 00:16:23.870 "data_offset": 2048, 00:16:23.870 "data_size": 63488 00:16:23.870 } 00:16:23.870 ] 00:16:23.870 }' 00:16:23.870 10:39:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:23.870 10:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:23.870 10:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:23.870 10:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:23.870 10:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:24.809 10:39:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:24.809 10:39:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:24.809 10:39:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:24.809 10:39:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:24.809 10:39:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:24.809 10:39:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.068 10:39:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.068 10:39:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.068 10:39:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.068 10:39:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.068 10:39:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.068 10:39:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.068 "name": "raid_bdev1", 00:16:25.068 "uuid": "9742dd30-5d5e-4288-b4ca-c0821595d200", 00:16:25.068 
"strip_size_kb": 64, 00:16:25.068 "state": "online", 00:16:25.068 "raid_level": "raid5f", 00:16:25.068 "superblock": true, 00:16:25.068 "num_base_bdevs": 3, 00:16:25.068 "num_base_bdevs_discovered": 3, 00:16:25.068 "num_base_bdevs_operational": 3, 00:16:25.069 "process": { 00:16:25.069 "type": "rebuild", 00:16:25.069 "target": "spare", 00:16:25.069 "progress": { 00:16:25.069 "blocks": 92160, 00:16:25.069 "percent": 72 00:16:25.069 } 00:16:25.069 }, 00:16:25.069 "base_bdevs_list": [ 00:16:25.069 { 00:16:25.069 "name": "spare", 00:16:25.069 "uuid": "307df4c8-4bc4-59cd-8a8d-eae5c0a07695", 00:16:25.069 "is_configured": true, 00:16:25.069 "data_offset": 2048, 00:16:25.069 "data_size": 63488 00:16:25.069 }, 00:16:25.069 { 00:16:25.069 "name": "BaseBdev2", 00:16:25.069 "uuid": "f2b8aed4-de5a-5026-b048-3c1fbd0b58d2", 00:16:25.069 "is_configured": true, 00:16:25.069 "data_offset": 2048, 00:16:25.069 "data_size": 63488 00:16:25.069 }, 00:16:25.069 { 00:16:25.069 "name": "BaseBdev3", 00:16:25.069 "uuid": "c5a7dcbf-de0f-5f3e-9c01-80b9a714cd26", 00:16:25.069 "is_configured": true, 00:16:25.069 "data_offset": 2048, 00:16:25.069 "data_size": 63488 00:16:25.069 } 00:16:25.069 ] 00:16:25.069 }' 00:16:25.069 10:39:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.069 10:39:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:25.069 10:39:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.069 10:39:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:25.069 10:39:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:26.006 10:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:26.006 10:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:16:26.006 10:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:26.006 10:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:26.006 10:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:26.006 10:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:26.006 10:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.006 10:39:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.006 10:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.006 10:39:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.006 10:39:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.266 10:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:26.266 "name": "raid_bdev1", 00:16:26.266 "uuid": "9742dd30-5d5e-4288-b4ca-c0821595d200", 00:16:26.266 "strip_size_kb": 64, 00:16:26.266 "state": "online", 00:16:26.266 "raid_level": "raid5f", 00:16:26.266 "superblock": true, 00:16:26.266 "num_base_bdevs": 3, 00:16:26.266 "num_base_bdevs_discovered": 3, 00:16:26.266 "num_base_bdevs_operational": 3, 00:16:26.266 "process": { 00:16:26.266 "type": "rebuild", 00:16:26.266 "target": "spare", 00:16:26.266 "progress": { 00:16:26.266 "blocks": 114688, 00:16:26.266 "percent": 90 00:16:26.266 } 00:16:26.266 }, 00:16:26.266 "base_bdevs_list": [ 00:16:26.266 { 00:16:26.266 "name": "spare", 00:16:26.266 "uuid": "307df4c8-4bc4-59cd-8a8d-eae5c0a07695", 00:16:26.266 "is_configured": true, 00:16:26.266 "data_offset": 2048, 00:16:26.266 "data_size": 63488 00:16:26.266 }, 00:16:26.266 { 00:16:26.266 "name": "BaseBdev2", 00:16:26.266 "uuid": 
"f2b8aed4-de5a-5026-b048-3c1fbd0b58d2", 00:16:26.266 "is_configured": true, 00:16:26.266 "data_offset": 2048, 00:16:26.266 "data_size": 63488 00:16:26.266 }, 00:16:26.266 { 00:16:26.266 "name": "BaseBdev3", 00:16:26.266 "uuid": "c5a7dcbf-de0f-5f3e-9c01-80b9a714cd26", 00:16:26.266 "is_configured": true, 00:16:26.266 "data_offset": 2048, 00:16:26.266 "data_size": 63488 00:16:26.266 } 00:16:26.266 ] 00:16:26.266 }' 00:16:26.266 10:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:26.266 10:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:26.266 10:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.266 10:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:26.266 10:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:26.526 [2024-11-20 10:39:29.953404] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:26.526 [2024-11-20 10:39:29.953491] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:26.526 [2024-11-20 10:39:29.953624] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.095 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:27.095 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:27.095 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.095 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:27.095 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:27.095 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.355 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.355 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.355 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.355 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.355 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.355 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.355 "name": "raid_bdev1", 00:16:27.355 "uuid": "9742dd30-5d5e-4288-b4ca-c0821595d200", 00:16:27.355 "strip_size_kb": 64, 00:16:27.355 "state": "online", 00:16:27.355 "raid_level": "raid5f", 00:16:27.355 "superblock": true, 00:16:27.355 "num_base_bdevs": 3, 00:16:27.355 "num_base_bdevs_discovered": 3, 00:16:27.355 "num_base_bdevs_operational": 3, 00:16:27.355 "base_bdevs_list": [ 00:16:27.355 { 00:16:27.355 "name": "spare", 00:16:27.355 "uuid": "307df4c8-4bc4-59cd-8a8d-eae5c0a07695", 00:16:27.355 "is_configured": true, 00:16:27.355 "data_offset": 2048, 00:16:27.355 "data_size": 63488 00:16:27.355 }, 00:16:27.355 { 00:16:27.355 "name": "BaseBdev2", 00:16:27.355 "uuid": "f2b8aed4-de5a-5026-b048-3c1fbd0b58d2", 00:16:27.355 "is_configured": true, 00:16:27.356 "data_offset": 2048, 00:16:27.356 "data_size": 63488 00:16:27.356 }, 00:16:27.356 { 00:16:27.356 "name": "BaseBdev3", 00:16:27.356 "uuid": "c5a7dcbf-de0f-5f3e-9c01-80b9a714cd26", 00:16:27.356 "is_configured": true, 00:16:27.356 "data_offset": 2048, 00:16:27.356 "data_size": 63488 00:16:27.356 } 00:16:27.356 ] 00:16:27.356 }' 00:16:27.356 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.356 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:27.356 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.356 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:27.356 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:27.356 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:27.356 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.356 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:27.356 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:27.356 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.356 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.356 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.356 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.356 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.356 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.356 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.356 "name": "raid_bdev1", 00:16:27.356 "uuid": "9742dd30-5d5e-4288-b4ca-c0821595d200", 00:16:27.356 "strip_size_kb": 64, 00:16:27.356 "state": "online", 00:16:27.356 "raid_level": "raid5f", 00:16:27.356 "superblock": true, 00:16:27.356 "num_base_bdevs": 3, 00:16:27.356 "num_base_bdevs_discovered": 3, 00:16:27.356 "num_base_bdevs_operational": 3, 00:16:27.356 "base_bdevs_list": [ 
00:16:27.356 { 00:16:27.356 "name": "spare", 00:16:27.356 "uuid": "307df4c8-4bc4-59cd-8a8d-eae5c0a07695", 00:16:27.356 "is_configured": true, 00:16:27.356 "data_offset": 2048, 00:16:27.356 "data_size": 63488 00:16:27.356 }, 00:16:27.356 { 00:16:27.356 "name": "BaseBdev2", 00:16:27.356 "uuid": "f2b8aed4-de5a-5026-b048-3c1fbd0b58d2", 00:16:27.356 "is_configured": true, 00:16:27.356 "data_offset": 2048, 00:16:27.356 "data_size": 63488 00:16:27.356 }, 00:16:27.356 { 00:16:27.356 "name": "BaseBdev3", 00:16:27.356 "uuid": "c5a7dcbf-de0f-5f3e-9c01-80b9a714cd26", 00:16:27.356 "is_configured": true, 00:16:27.356 "data_offset": 2048, 00:16:27.356 "data_size": 63488 00:16:27.356 } 00:16:27.356 ] 00:16:27.356 }' 00:16:27.356 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.356 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:27.356 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.616 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:27.616 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:27.616 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:27.616 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.616 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:27.616 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.616 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:27.616 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.616 10:39:30 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.616 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.616 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.616 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.616 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.616 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.616 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.616 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.616 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.616 "name": "raid_bdev1", 00:16:27.616 "uuid": "9742dd30-5d5e-4288-b4ca-c0821595d200", 00:16:27.616 "strip_size_kb": 64, 00:16:27.616 "state": "online", 00:16:27.616 "raid_level": "raid5f", 00:16:27.616 "superblock": true, 00:16:27.616 "num_base_bdevs": 3, 00:16:27.616 "num_base_bdevs_discovered": 3, 00:16:27.616 "num_base_bdevs_operational": 3, 00:16:27.616 "base_bdevs_list": [ 00:16:27.616 { 00:16:27.616 "name": "spare", 00:16:27.616 "uuid": "307df4c8-4bc4-59cd-8a8d-eae5c0a07695", 00:16:27.616 "is_configured": true, 00:16:27.616 "data_offset": 2048, 00:16:27.616 "data_size": 63488 00:16:27.616 }, 00:16:27.616 { 00:16:27.616 "name": "BaseBdev2", 00:16:27.616 "uuid": "f2b8aed4-de5a-5026-b048-3c1fbd0b58d2", 00:16:27.616 "is_configured": true, 00:16:27.616 "data_offset": 2048, 00:16:27.616 "data_size": 63488 00:16:27.616 }, 00:16:27.616 { 00:16:27.616 "name": "BaseBdev3", 00:16:27.616 "uuid": "c5a7dcbf-de0f-5f3e-9c01-80b9a714cd26", 00:16:27.616 "is_configured": true, 00:16:27.616 "data_offset": 2048, 00:16:27.616 
"data_size": 63488 00:16:27.616 } 00:16:27.616 ] 00:16:27.616 }' 00:16:27.616 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.616 10:39:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.877 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:27.877 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.877 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.877 [2024-11-20 10:39:31.322637] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:27.877 [2024-11-20 10:39:31.322720] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:27.877 [2024-11-20 10:39:31.322834] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:27.877 [2024-11-20 10:39:31.322958] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:27.877 [2024-11-20 10:39:31.323022] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:27.877 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.877 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:27.877 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.877 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.877 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.877 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.138 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 
00:16:28.138 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:28.138 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:28.138 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:28.138 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:28.138 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:28.138 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:28.138 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:28.138 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:28.138 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:28.138 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:28.138 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:28.138 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:28.138 /dev/nbd0 00:16:28.411 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:28.411 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:28.411 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:28.411 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:28.411 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:28.411 10:39:31 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:28.411 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:28.411 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:28.411 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:28.411 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:28.411 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:28.411 1+0 records in 00:16:28.411 1+0 records out 00:16:28.411 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000464007 s, 8.8 MB/s 00:16:28.411 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:28.411 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:28.411 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:28.411 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:28.411 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:28.411 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:28.411 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:28.411 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:28.411 /dev/nbd1 00:16:28.411 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:28.411 10:39:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:28.411 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:28.411 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:28.411 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:28.411 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:28.411 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:28.688 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:28.688 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:28.688 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:28.688 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:28.688 1+0 records in 00:16:28.688 1+0 records out 00:16:28.688 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000515803 s, 7.9 MB/s 00:16:28.688 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:28.688 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:28.688 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:28.688 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:28.688 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:28.688 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:28.688 10:39:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:28.688 10:39:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:28.688 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:28.688 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:28.688 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:28.688 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:28.688 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:28.688 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:28.688 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:28.948 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:28.948 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:28.948 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:28.948 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:28.948 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:28.948 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:28.948 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:28.948 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:28.948 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:28.948 
10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:29.207 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:29.207 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:29.207 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:29.207 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:29.207 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:29.207 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:29.207 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:29.207 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:29.207 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:29.207 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:29.207 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.207 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.207 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.207 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:29.207 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.207 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.207 [2024-11-20 10:39:32.579068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:29.207 
[2024-11-20 10:39:32.579143] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.207 [2024-11-20 10:39:32.579168] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:29.207 [2024-11-20 10:39:32.579181] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.207 [2024-11-20 10:39:32.581755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.207 [2024-11-20 10:39:32.581801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:29.207 [2024-11-20 10:39:32.581899] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:29.208 [2024-11-20 10:39:32.581981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:29.208 [2024-11-20 10:39:32.582167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:29.208 [2024-11-20 10:39:32.582289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:29.208 spare 00:16:29.208 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.208 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:29.208 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.208 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.208 [2024-11-20 10:39:32.682219] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:29.208 [2024-11-20 10:39:32.682254] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:29.208 [2024-11-20 10:39:32.682560] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:16:29.467 [2024-11-20 10:39:32.688385] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:29.467 [2024-11-20 10:39:32.688410] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:29.467 [2024-11-20 10:39:32.688606] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.467 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.467 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:29.467 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:29.467 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:29.467 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.467 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.468 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:29.468 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.468 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.468 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.468 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.468 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.468 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.468 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.468 10:39:32 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:16:29.468 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.468 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.468 "name": "raid_bdev1", 00:16:29.468 "uuid": "9742dd30-5d5e-4288-b4ca-c0821595d200", 00:16:29.468 "strip_size_kb": 64, 00:16:29.468 "state": "online", 00:16:29.468 "raid_level": "raid5f", 00:16:29.468 "superblock": true, 00:16:29.468 "num_base_bdevs": 3, 00:16:29.468 "num_base_bdevs_discovered": 3, 00:16:29.468 "num_base_bdevs_operational": 3, 00:16:29.468 "base_bdevs_list": [ 00:16:29.468 { 00:16:29.468 "name": "spare", 00:16:29.468 "uuid": "307df4c8-4bc4-59cd-8a8d-eae5c0a07695", 00:16:29.468 "is_configured": true, 00:16:29.468 "data_offset": 2048, 00:16:29.468 "data_size": 63488 00:16:29.468 }, 00:16:29.468 { 00:16:29.468 "name": "BaseBdev2", 00:16:29.468 "uuid": "f2b8aed4-de5a-5026-b048-3c1fbd0b58d2", 00:16:29.468 "is_configured": true, 00:16:29.468 "data_offset": 2048, 00:16:29.468 "data_size": 63488 00:16:29.468 }, 00:16:29.468 { 00:16:29.468 "name": "BaseBdev3", 00:16:29.468 "uuid": "c5a7dcbf-de0f-5f3e-9c01-80b9a714cd26", 00:16:29.468 "is_configured": true, 00:16:29.468 "data_offset": 2048, 00:16:29.468 "data_size": 63488 00:16:29.468 } 00:16:29.468 ] 00:16:29.468 }' 00:16:29.468 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.468 10:39:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.728 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:29.728 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.728 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:29.728 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # 
local target=none 00:16:29.728 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.728 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.728 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.728 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.728 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.728 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.988 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.988 "name": "raid_bdev1", 00:16:29.988 "uuid": "9742dd30-5d5e-4288-b4ca-c0821595d200", 00:16:29.988 "strip_size_kb": 64, 00:16:29.988 "state": "online", 00:16:29.988 "raid_level": "raid5f", 00:16:29.988 "superblock": true, 00:16:29.988 "num_base_bdevs": 3, 00:16:29.988 "num_base_bdevs_discovered": 3, 00:16:29.988 "num_base_bdevs_operational": 3, 00:16:29.988 "base_bdevs_list": [ 00:16:29.988 { 00:16:29.988 "name": "spare", 00:16:29.988 "uuid": "307df4c8-4bc4-59cd-8a8d-eae5c0a07695", 00:16:29.988 "is_configured": true, 00:16:29.988 "data_offset": 2048, 00:16:29.988 "data_size": 63488 00:16:29.988 }, 00:16:29.988 { 00:16:29.988 "name": "BaseBdev2", 00:16:29.988 "uuid": "f2b8aed4-de5a-5026-b048-3c1fbd0b58d2", 00:16:29.988 "is_configured": true, 00:16:29.988 "data_offset": 2048, 00:16:29.989 "data_size": 63488 00:16:29.989 }, 00:16:29.989 { 00:16:29.989 "name": "BaseBdev3", 00:16:29.989 "uuid": "c5a7dcbf-de0f-5f3e-9c01-80b9a714cd26", 00:16:29.989 "is_configured": true, 00:16:29.989 "data_offset": 2048, 00:16:29.989 "data_size": 63488 00:16:29.989 } 00:16:29.989 ] 00:16:29.989 }' 00:16:29.989 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:16:29.989 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:29.989 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.989 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:29.989 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:29.989 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.989 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.989 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.989 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.989 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:29.989 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:29.989 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.989 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.989 [2024-11-20 10:39:33.346265] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:29.989 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.989 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:29.989 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:29.989 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:29.989 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:16:29.989 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.989 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:29.989 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.989 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.989 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.989 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.989 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.989 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.989 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.989 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.989 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.989 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.989 "name": "raid_bdev1", 00:16:29.989 "uuid": "9742dd30-5d5e-4288-b4ca-c0821595d200", 00:16:29.989 "strip_size_kb": 64, 00:16:29.989 "state": "online", 00:16:29.989 "raid_level": "raid5f", 00:16:29.989 "superblock": true, 00:16:29.989 "num_base_bdevs": 3, 00:16:29.989 "num_base_bdevs_discovered": 2, 00:16:29.989 "num_base_bdevs_operational": 2, 00:16:29.989 "base_bdevs_list": [ 00:16:29.989 { 00:16:29.989 "name": null, 00:16:29.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.989 "is_configured": false, 00:16:29.989 "data_offset": 0, 00:16:29.989 "data_size": 63488 00:16:29.989 }, 00:16:29.989 { 00:16:29.989 "name": "BaseBdev2", 
00:16:29.989 "uuid": "f2b8aed4-de5a-5026-b048-3c1fbd0b58d2", 00:16:29.989 "is_configured": true, 00:16:29.989 "data_offset": 2048, 00:16:29.989 "data_size": 63488 00:16:29.989 }, 00:16:29.989 { 00:16:29.989 "name": "BaseBdev3", 00:16:29.989 "uuid": "c5a7dcbf-de0f-5f3e-9c01-80b9a714cd26", 00:16:29.989 "is_configured": true, 00:16:29.989 "data_offset": 2048, 00:16:29.989 "data_size": 63488 00:16:29.989 } 00:16:29.989 ] 00:16:29.989 }' 00:16:29.989 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.989 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.559 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:30.559 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.559 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.559 [2024-11-20 10:39:33.825452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:30.559 [2024-11-20 10:39:33.825694] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:30.559 [2024-11-20 10:39:33.825758] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:30.559 [2024-11-20 10:39:33.825825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:30.559 [2024-11-20 10:39:33.841634] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:16:30.559 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.559 10:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:30.559 [2024-11-20 10:39:33.848366] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:31.499 10:39:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:31.499 10:39:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.499 10:39:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:31.499 10:39:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:31.499 10:39:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.499 10:39:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.499 10:39:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.499 10:39:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.499 10:39:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.499 10:39:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.499 10:39:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.499 "name": "raid_bdev1", 00:16:31.499 "uuid": "9742dd30-5d5e-4288-b4ca-c0821595d200", 00:16:31.499 "strip_size_kb": 64, 00:16:31.499 "state": "online", 00:16:31.499 
"raid_level": "raid5f", 00:16:31.499 "superblock": true, 00:16:31.499 "num_base_bdevs": 3, 00:16:31.499 "num_base_bdevs_discovered": 3, 00:16:31.499 "num_base_bdevs_operational": 3, 00:16:31.499 "process": { 00:16:31.499 "type": "rebuild", 00:16:31.499 "target": "spare", 00:16:31.499 "progress": { 00:16:31.499 "blocks": 20480, 00:16:31.499 "percent": 16 00:16:31.499 } 00:16:31.499 }, 00:16:31.499 "base_bdevs_list": [ 00:16:31.499 { 00:16:31.499 "name": "spare", 00:16:31.499 "uuid": "307df4c8-4bc4-59cd-8a8d-eae5c0a07695", 00:16:31.499 "is_configured": true, 00:16:31.499 "data_offset": 2048, 00:16:31.499 "data_size": 63488 00:16:31.499 }, 00:16:31.499 { 00:16:31.499 "name": "BaseBdev2", 00:16:31.499 "uuid": "f2b8aed4-de5a-5026-b048-3c1fbd0b58d2", 00:16:31.499 "is_configured": true, 00:16:31.499 "data_offset": 2048, 00:16:31.499 "data_size": 63488 00:16:31.499 }, 00:16:31.499 { 00:16:31.499 "name": "BaseBdev3", 00:16:31.499 "uuid": "c5a7dcbf-de0f-5f3e-9c01-80b9a714cd26", 00:16:31.499 "is_configured": true, 00:16:31.499 "data_offset": 2048, 00:16:31.499 "data_size": 63488 00:16:31.499 } 00:16:31.499 ] 00:16:31.499 }' 00:16:31.499 10:39:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.499 10:39:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:31.499 10:39:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.759 10:39:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:31.759 10:39:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:31.759 10:39:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.759 10:39:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.759 [2024-11-20 10:39:34.987777] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:31.759 [2024-11-20 10:39:35.057640] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:31.759 [2024-11-20 10:39:35.057769] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.759 [2024-11-20 10:39:35.057807] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:31.759 [2024-11-20 10:39:35.057832] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:31.759 10:39:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.759 10:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:31.759 10:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.759 10:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.759 10:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.759 10:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.759 10:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:31.759 10:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.759 10:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.759 10:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.760 10:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.760 10:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.760 10:39:35 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.760 10:39:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.760 10:39:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.760 10:39:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.760 10:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.760 "name": "raid_bdev1", 00:16:31.760 "uuid": "9742dd30-5d5e-4288-b4ca-c0821595d200", 00:16:31.760 "strip_size_kb": 64, 00:16:31.760 "state": "online", 00:16:31.760 "raid_level": "raid5f", 00:16:31.760 "superblock": true, 00:16:31.760 "num_base_bdevs": 3, 00:16:31.760 "num_base_bdevs_discovered": 2, 00:16:31.760 "num_base_bdevs_operational": 2, 00:16:31.760 "base_bdevs_list": [ 00:16:31.760 { 00:16:31.760 "name": null, 00:16:31.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.760 "is_configured": false, 00:16:31.760 "data_offset": 0, 00:16:31.760 "data_size": 63488 00:16:31.760 }, 00:16:31.760 { 00:16:31.760 "name": "BaseBdev2", 00:16:31.760 "uuid": "f2b8aed4-de5a-5026-b048-3c1fbd0b58d2", 00:16:31.760 "is_configured": true, 00:16:31.760 "data_offset": 2048, 00:16:31.760 "data_size": 63488 00:16:31.760 }, 00:16:31.760 { 00:16:31.760 "name": "BaseBdev3", 00:16:31.760 "uuid": "c5a7dcbf-de0f-5f3e-9c01-80b9a714cd26", 00:16:31.760 "is_configured": true, 00:16:31.760 "data_offset": 2048, 00:16:31.760 "data_size": 63488 00:16:31.760 } 00:16:31.760 ] 00:16:31.760 }' 00:16:31.760 10:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.760 10:39:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.330 10:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:32.330 10:39:35 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.330 10:39:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.330 [2024-11-20 10:39:35.543974] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:32.330 [2024-11-20 10:39:35.544098] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.330 [2024-11-20 10:39:35.544156] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:16:32.330 [2024-11-20 10:39:35.544198] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.330 [2024-11-20 10:39:35.544798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.330 [2024-11-20 10:39:35.544874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:32.330 [2024-11-20 10:39:35.545019] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:32.330 [2024-11-20 10:39:35.545067] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:32.330 [2024-11-20 10:39:35.545116] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:32.330 [2024-11-20 10:39:35.545197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:32.330 [2024-11-20 10:39:35.561852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:16:32.330 spare 00:16:32.330 10:39:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.330 10:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:32.330 [2024-11-20 10:39:35.569453] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:33.269 10:39:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:33.269 10:39:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.269 10:39:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:33.269 10:39:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:33.269 10:39:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.269 10:39:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.269 10:39:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.269 10:39:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.269 10:39:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.269 10:39:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.269 10:39:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.269 "name": "raid_bdev1", 00:16:33.269 "uuid": "9742dd30-5d5e-4288-b4ca-c0821595d200", 00:16:33.269 "strip_size_kb": 64, 00:16:33.269 "state": 
"online", 00:16:33.269 "raid_level": "raid5f", 00:16:33.269 "superblock": true, 00:16:33.269 "num_base_bdevs": 3, 00:16:33.269 "num_base_bdevs_discovered": 3, 00:16:33.269 "num_base_bdevs_operational": 3, 00:16:33.269 "process": { 00:16:33.269 "type": "rebuild", 00:16:33.269 "target": "spare", 00:16:33.269 "progress": { 00:16:33.269 "blocks": 20480, 00:16:33.269 "percent": 16 00:16:33.269 } 00:16:33.269 }, 00:16:33.269 "base_bdevs_list": [ 00:16:33.269 { 00:16:33.269 "name": "spare", 00:16:33.269 "uuid": "307df4c8-4bc4-59cd-8a8d-eae5c0a07695", 00:16:33.269 "is_configured": true, 00:16:33.269 "data_offset": 2048, 00:16:33.269 "data_size": 63488 00:16:33.269 }, 00:16:33.269 { 00:16:33.269 "name": "BaseBdev2", 00:16:33.269 "uuid": "f2b8aed4-de5a-5026-b048-3c1fbd0b58d2", 00:16:33.269 "is_configured": true, 00:16:33.269 "data_offset": 2048, 00:16:33.269 "data_size": 63488 00:16:33.269 }, 00:16:33.269 { 00:16:33.269 "name": "BaseBdev3", 00:16:33.269 "uuid": "c5a7dcbf-de0f-5f3e-9c01-80b9a714cd26", 00:16:33.269 "is_configured": true, 00:16:33.269 "data_offset": 2048, 00:16:33.269 "data_size": 63488 00:16:33.269 } 00:16:33.269 ] 00:16:33.269 }' 00:16:33.269 10:39:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.269 10:39:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:33.269 10:39:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.269 10:39:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:33.269 10:39:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:33.269 10:39:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.269 10:39:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.269 [2024-11-20 10:39:36.704650] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:33.529 [2024-11-20 10:39:36.777647] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:33.529 [2024-11-20 10:39:36.777779] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.529 [2024-11-20 10:39:36.777836] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:33.529 [2024-11-20 10:39:36.777858] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:33.529 10:39:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.529 10:39:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:33.529 10:39:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:33.529 10:39:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.529 10:39:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.529 10:39:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.529 10:39:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:33.529 10:39:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.529 10:39:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.529 10:39:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.529 10:39:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.529 10:39:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.529 10:39:36 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.529 10:39:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.529 10:39:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.529 10:39:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.529 10:39:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.529 "name": "raid_bdev1", 00:16:33.529 "uuid": "9742dd30-5d5e-4288-b4ca-c0821595d200", 00:16:33.529 "strip_size_kb": 64, 00:16:33.529 "state": "online", 00:16:33.529 "raid_level": "raid5f", 00:16:33.530 "superblock": true, 00:16:33.530 "num_base_bdevs": 3, 00:16:33.530 "num_base_bdevs_discovered": 2, 00:16:33.530 "num_base_bdevs_operational": 2, 00:16:33.530 "base_bdevs_list": [ 00:16:33.530 { 00:16:33.530 "name": null, 00:16:33.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.530 "is_configured": false, 00:16:33.530 "data_offset": 0, 00:16:33.530 "data_size": 63488 00:16:33.530 }, 00:16:33.530 { 00:16:33.530 "name": "BaseBdev2", 00:16:33.530 "uuid": "f2b8aed4-de5a-5026-b048-3c1fbd0b58d2", 00:16:33.530 "is_configured": true, 00:16:33.530 "data_offset": 2048, 00:16:33.530 "data_size": 63488 00:16:33.530 }, 00:16:33.530 { 00:16:33.530 "name": "BaseBdev3", 00:16:33.530 "uuid": "c5a7dcbf-de0f-5f3e-9c01-80b9a714cd26", 00:16:33.530 "is_configured": true, 00:16:33.530 "data_offset": 2048, 00:16:33.530 "data_size": 63488 00:16:33.530 } 00:16:33.530 ] 00:16:33.530 }' 00:16:33.530 10:39:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.530 10:39:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.789 10:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:33.789 10:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:16:33.789 10:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:33.789 10:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:33.789 10:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.789 10:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.789 10:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.789 10:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.789 10:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.051 10:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.051 10:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:34.051 "name": "raid_bdev1", 00:16:34.051 "uuid": "9742dd30-5d5e-4288-b4ca-c0821595d200", 00:16:34.051 "strip_size_kb": 64, 00:16:34.051 "state": "online", 00:16:34.051 "raid_level": "raid5f", 00:16:34.051 "superblock": true, 00:16:34.051 "num_base_bdevs": 3, 00:16:34.051 "num_base_bdevs_discovered": 2, 00:16:34.051 "num_base_bdevs_operational": 2, 00:16:34.051 "base_bdevs_list": [ 00:16:34.051 { 00:16:34.051 "name": null, 00:16:34.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.051 "is_configured": false, 00:16:34.051 "data_offset": 0, 00:16:34.051 "data_size": 63488 00:16:34.051 }, 00:16:34.051 { 00:16:34.051 "name": "BaseBdev2", 00:16:34.051 "uuid": "f2b8aed4-de5a-5026-b048-3c1fbd0b58d2", 00:16:34.051 "is_configured": true, 00:16:34.051 "data_offset": 2048, 00:16:34.051 "data_size": 63488 00:16:34.051 }, 00:16:34.051 { 00:16:34.051 "name": "BaseBdev3", 00:16:34.051 "uuid": "c5a7dcbf-de0f-5f3e-9c01-80b9a714cd26", 00:16:34.051 "is_configured": true, 
00:16:34.051 "data_offset": 2048, 00:16:34.051 "data_size": 63488 00:16:34.051 } 00:16:34.051 ] 00:16:34.051 }' 00:16:34.051 10:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:34.051 10:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:34.051 10:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:34.051 10:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:34.051 10:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:34.051 10:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.051 10:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.051 10:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.051 10:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:34.051 10:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.051 10:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.051 [2024-11-20 10:39:37.412657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:34.051 [2024-11-20 10:39:37.412750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.051 [2024-11-20 10:39:37.412794] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:34.051 [2024-11-20 10:39:37.412804] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.051 [2024-11-20 10:39:37.413262] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.051 [2024-11-20 
10:39:37.413286] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:34.051 [2024-11-20 10:39:37.413377] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:34.051 [2024-11-20 10:39:37.413394] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:34.051 [2024-11-20 10:39:37.413414] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:34.051 [2024-11-20 10:39:37.413424] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:34.051 BaseBdev1 00:16:34.051 10:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.051 10:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:34.989 10:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:34.989 10:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:34.990 10:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:34.990 10:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.990 10:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.990 10:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:34.990 10:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.990 10:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.990 10:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.990 10:39:38 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.990 10:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.990 10:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.990 10:39:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.990 10:39:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.990 10:39:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.248 10:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.248 "name": "raid_bdev1", 00:16:35.248 "uuid": "9742dd30-5d5e-4288-b4ca-c0821595d200", 00:16:35.248 "strip_size_kb": 64, 00:16:35.248 "state": "online", 00:16:35.248 "raid_level": "raid5f", 00:16:35.248 "superblock": true, 00:16:35.248 "num_base_bdevs": 3, 00:16:35.248 "num_base_bdevs_discovered": 2, 00:16:35.248 "num_base_bdevs_operational": 2, 00:16:35.248 "base_bdevs_list": [ 00:16:35.248 { 00:16:35.248 "name": null, 00:16:35.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.248 "is_configured": false, 00:16:35.248 "data_offset": 0, 00:16:35.248 "data_size": 63488 00:16:35.248 }, 00:16:35.248 { 00:16:35.248 "name": "BaseBdev2", 00:16:35.248 "uuid": "f2b8aed4-de5a-5026-b048-3c1fbd0b58d2", 00:16:35.248 "is_configured": true, 00:16:35.248 "data_offset": 2048, 00:16:35.248 "data_size": 63488 00:16:35.248 }, 00:16:35.248 { 00:16:35.248 "name": "BaseBdev3", 00:16:35.248 "uuid": "c5a7dcbf-de0f-5f3e-9c01-80b9a714cd26", 00:16:35.248 "is_configured": true, 00:16:35.248 "data_offset": 2048, 00:16:35.248 "data_size": 63488 00:16:35.248 } 00:16:35.248 ] 00:16:35.248 }' 00:16:35.248 10:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.248 10:39:38 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:35.507 10:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:35.507 10:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.507 10:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:35.507 10:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:35.507 10:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.507 10:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.507 10:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.507 10:39:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.507 10:39:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.507 10:39:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.507 10:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.507 "name": "raid_bdev1", 00:16:35.507 "uuid": "9742dd30-5d5e-4288-b4ca-c0821595d200", 00:16:35.507 "strip_size_kb": 64, 00:16:35.507 "state": "online", 00:16:35.507 "raid_level": "raid5f", 00:16:35.507 "superblock": true, 00:16:35.507 "num_base_bdevs": 3, 00:16:35.507 "num_base_bdevs_discovered": 2, 00:16:35.507 "num_base_bdevs_operational": 2, 00:16:35.507 "base_bdevs_list": [ 00:16:35.507 { 00:16:35.507 "name": null, 00:16:35.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.507 "is_configured": false, 00:16:35.507 "data_offset": 0, 00:16:35.507 "data_size": 63488 00:16:35.507 }, 00:16:35.507 { 00:16:35.507 "name": "BaseBdev2", 00:16:35.507 "uuid": "f2b8aed4-de5a-5026-b048-3c1fbd0b58d2", 
00:16:35.507 "is_configured": true, 00:16:35.507 "data_offset": 2048, 00:16:35.507 "data_size": 63488 00:16:35.507 }, 00:16:35.507 { 00:16:35.507 "name": "BaseBdev3", 00:16:35.507 "uuid": "c5a7dcbf-de0f-5f3e-9c01-80b9a714cd26", 00:16:35.507 "is_configured": true, 00:16:35.507 "data_offset": 2048, 00:16:35.507 "data_size": 63488 00:16:35.507 } 00:16:35.507 ] 00:16:35.507 }' 00:16:35.507 10:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.767 10:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:35.767 10:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.767 10:39:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:35.767 10:39:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:35.767 10:39:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:16:35.767 10:39:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:35.767 10:39:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:35.767 10:39:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:35.767 10:39:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:35.767 10:39:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:35.767 10:39:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:35.767 10:39:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.767 10:39:39 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.767 [2024-11-20 10:39:39.027588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:35.767 [2024-11-20 10:39:39.027822] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:35.767 [2024-11-20 10:39:39.027895] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:35.767 request: 00:16:35.767 { 00:16:35.767 "base_bdev": "BaseBdev1", 00:16:35.767 "raid_bdev": "raid_bdev1", 00:16:35.767 "method": "bdev_raid_add_base_bdev", 00:16:35.767 "req_id": 1 00:16:35.767 } 00:16:35.767 Got JSON-RPC error response 00:16:35.767 response: 00:16:35.767 { 00:16:35.767 "code": -22, 00:16:35.767 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:35.767 } 00:16:35.767 10:39:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:35.767 10:39:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:16:35.767 10:39:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:35.767 10:39:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:35.767 10:39:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:35.767 10:39:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:36.704 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:36.704 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:36.704 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.704 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:36.704 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.704 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:36.704 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.704 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.704 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.704 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.704 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.704 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.704 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.704 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.704 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.704 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.704 "name": "raid_bdev1", 00:16:36.704 "uuid": "9742dd30-5d5e-4288-b4ca-c0821595d200", 00:16:36.704 "strip_size_kb": 64, 00:16:36.704 "state": "online", 00:16:36.704 "raid_level": "raid5f", 00:16:36.704 "superblock": true, 00:16:36.704 "num_base_bdevs": 3, 00:16:36.704 "num_base_bdevs_discovered": 2, 00:16:36.704 "num_base_bdevs_operational": 2, 00:16:36.704 "base_bdevs_list": [ 00:16:36.704 { 00:16:36.704 "name": null, 00:16:36.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.704 "is_configured": false, 00:16:36.704 "data_offset": 0, 00:16:36.704 "data_size": 63488 00:16:36.704 }, 00:16:36.704 { 00:16:36.704 
"name": "BaseBdev2", 00:16:36.704 "uuid": "f2b8aed4-de5a-5026-b048-3c1fbd0b58d2", 00:16:36.704 "is_configured": true, 00:16:36.704 "data_offset": 2048, 00:16:36.704 "data_size": 63488 00:16:36.704 }, 00:16:36.704 { 00:16:36.704 "name": "BaseBdev3", 00:16:36.704 "uuid": "c5a7dcbf-de0f-5f3e-9c01-80b9a714cd26", 00:16:36.704 "is_configured": true, 00:16:36.704 "data_offset": 2048, 00:16:36.704 "data_size": 63488 00:16:36.704 } 00:16:36.704 ] 00:16:36.704 }' 00:16:36.704 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.704 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.274 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:37.274 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.274 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:37.274 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:37.274 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.274 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.274 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.274 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.274 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.274 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.274 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.274 "name": "raid_bdev1", 00:16:37.274 "uuid": "9742dd30-5d5e-4288-b4ca-c0821595d200", 00:16:37.274 
"strip_size_kb": 64, 00:16:37.274 "state": "online", 00:16:37.274 "raid_level": "raid5f", 00:16:37.274 "superblock": true, 00:16:37.274 "num_base_bdevs": 3, 00:16:37.274 "num_base_bdevs_discovered": 2, 00:16:37.274 "num_base_bdevs_operational": 2, 00:16:37.274 "base_bdevs_list": [ 00:16:37.274 { 00:16:37.274 "name": null, 00:16:37.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.274 "is_configured": false, 00:16:37.274 "data_offset": 0, 00:16:37.274 "data_size": 63488 00:16:37.274 }, 00:16:37.274 { 00:16:37.274 "name": "BaseBdev2", 00:16:37.274 "uuid": "f2b8aed4-de5a-5026-b048-3c1fbd0b58d2", 00:16:37.274 "is_configured": true, 00:16:37.274 "data_offset": 2048, 00:16:37.274 "data_size": 63488 00:16:37.274 }, 00:16:37.274 { 00:16:37.274 "name": "BaseBdev3", 00:16:37.274 "uuid": "c5a7dcbf-de0f-5f3e-9c01-80b9a714cd26", 00:16:37.274 "is_configured": true, 00:16:37.274 "data_offset": 2048, 00:16:37.274 "data_size": 63488 00:16:37.274 } 00:16:37.274 ] 00:16:37.274 }' 00:16:37.274 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.274 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:37.274 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.274 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:37.274 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82176 00:16:37.274 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82176 ']' 00:16:37.274 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82176 00:16:37.274 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:37.274 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:37.274 10:39:40 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82176 00:16:37.274 killing process with pid 82176 00:16:37.274 Received shutdown signal, test time was about 60.000000 seconds 00:16:37.274 00:16:37.274 Latency(us) 00:16:37.274 [2024-11-20T10:39:40.753Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.274 [2024-11-20T10:39:40.753Z] =================================================================================================================== 00:16:37.274 [2024-11-20T10:39:40.753Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:37.274 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:37.274 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:37.274 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82176' 00:16:37.274 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82176 00:16:37.274 [2024-11-20 10:39:40.675612] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:37.274 [2024-11-20 10:39:40.675745] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:37.274 10:39:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82176 00:16:37.274 [2024-11-20 10:39:40.675812] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:37.274 [2024-11-20 10:39:40.675825] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:37.843 [2024-11-20 10:39:41.058458] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:38.781 10:39:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:38.781 00:16:38.781 real 0m23.316s 00:16:38.781 user 0m29.999s 
00:16:38.781 sys 0m2.755s 00:16:38.781 10:39:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:38.781 ************************************ 00:16:38.781 END TEST raid5f_rebuild_test_sb 00:16:38.781 ************************************ 00:16:38.781 10:39:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.781 10:39:42 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:38.781 10:39:42 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:16:38.781 10:39:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:38.781 10:39:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:38.781 10:39:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:38.781 ************************************ 00:16:38.781 START TEST raid5f_state_function_test 00:16:38.781 ************************************ 00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82934 00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82934' 00:16:38.781 Process raid pid: 82934 00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82934 00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 82934 ']' 00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:38.781 10:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.041 [2024-11-20 10:39:42.280365] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:16:39.041 [2024-11-20 10:39:42.280551] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.041 [2024-11-20 10:39:42.455759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.301 [2024-11-20 10:39:42.566983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.301 [2024-11-20 10:39:42.762219] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:39.301 [2024-11-20 10:39:42.762302] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:39.870 10:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:39.870 10:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:16:39.870 10:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:39.870 10:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.870 10:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.870 [2024-11-20 10:39:43.111576] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:39.870 [2024-11-20 10:39:43.111674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:39.870 [2024-11-20 10:39:43.111717] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:39.870 [2024-11-20 10:39:43.111740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:39.870 [2024-11-20 10:39:43.111757] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:16:39.870 [2024-11-20 10:39:43.111800] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:39.870 [2024-11-20 10:39:43.111824] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:39.870 [2024-11-20 10:39:43.111864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:39.870 10:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.870 10:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:39.870 10:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:39.870 10:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:39.870 10:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:39.870 10:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:39.870 10:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:39.870 10:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.870 10:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.870 10:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.870 10:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.870 10:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.870 10:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.870 10:39:43 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:39.870 10:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.870 10:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.870 10:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.870 "name": "Existed_Raid", 00:16:39.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.870 "strip_size_kb": 64, 00:16:39.870 "state": "configuring", 00:16:39.870 "raid_level": "raid5f", 00:16:39.870 "superblock": false, 00:16:39.870 "num_base_bdevs": 4, 00:16:39.870 "num_base_bdevs_discovered": 0, 00:16:39.870 "num_base_bdevs_operational": 4, 00:16:39.870 "base_bdevs_list": [ 00:16:39.870 { 00:16:39.870 "name": "BaseBdev1", 00:16:39.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.870 "is_configured": false, 00:16:39.870 "data_offset": 0, 00:16:39.870 "data_size": 0 00:16:39.870 }, 00:16:39.870 { 00:16:39.870 "name": "BaseBdev2", 00:16:39.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.871 "is_configured": false, 00:16:39.871 "data_offset": 0, 00:16:39.871 "data_size": 0 00:16:39.871 }, 00:16:39.871 { 00:16:39.871 "name": "BaseBdev3", 00:16:39.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.871 "is_configured": false, 00:16:39.871 "data_offset": 0, 00:16:39.871 "data_size": 0 00:16:39.871 }, 00:16:39.871 { 00:16:39.871 "name": "BaseBdev4", 00:16:39.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.871 "is_configured": false, 00:16:39.871 "data_offset": 0, 00:16:39.871 "data_size": 0 00:16:39.871 } 00:16:39.871 ] 00:16:39.871 }' 00:16:39.871 10:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.871 10:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.129 10:39:43 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:40.129 10:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.129 10:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.129 [2024-11-20 10:39:43.562887] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:40.129 [2024-11-20 10:39:43.562965] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:40.129 10:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.129 10:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:40.129 10:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.129 10:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.129 [2024-11-20 10:39:43.574869] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:40.129 [2024-11-20 10:39:43.574965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:40.129 [2024-11-20 10:39:43.574992] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:40.129 [2024-11-20 10:39:43.575014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:40.129 [2024-11-20 10:39:43.575032] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:40.129 [2024-11-20 10:39:43.575052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:40.129 [2024-11-20 10:39:43.575069] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:16:40.130 [2024-11-20 10:39:43.575089] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:40.130 10:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.130 10:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:40.130 10:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.130 10:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.389 [2024-11-20 10:39:43.621845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:40.389 BaseBdev1 00:16:40.389 10:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.389 10:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:40.389 10:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:40.389 10:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:40.389 10:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:40.389 10:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:40.389 10:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:40.389 10:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:40.389 10:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.389 10:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.389 10:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.389 
10:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:40.389 10:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.389 10:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.389 [ 00:16:40.389 { 00:16:40.389 "name": "BaseBdev1", 00:16:40.389 "aliases": [ 00:16:40.389 "b65b9bbf-034b-43cf-91da-71b716623af1" 00:16:40.389 ], 00:16:40.389 "product_name": "Malloc disk", 00:16:40.389 "block_size": 512, 00:16:40.389 "num_blocks": 65536, 00:16:40.389 "uuid": "b65b9bbf-034b-43cf-91da-71b716623af1", 00:16:40.389 "assigned_rate_limits": { 00:16:40.389 "rw_ios_per_sec": 0, 00:16:40.389 "rw_mbytes_per_sec": 0, 00:16:40.389 "r_mbytes_per_sec": 0, 00:16:40.389 "w_mbytes_per_sec": 0 00:16:40.389 }, 00:16:40.389 "claimed": true, 00:16:40.389 "claim_type": "exclusive_write", 00:16:40.389 "zoned": false, 00:16:40.389 "supported_io_types": { 00:16:40.389 "read": true, 00:16:40.389 "write": true, 00:16:40.389 "unmap": true, 00:16:40.389 "flush": true, 00:16:40.389 "reset": true, 00:16:40.389 "nvme_admin": false, 00:16:40.389 "nvme_io": false, 00:16:40.389 "nvme_io_md": false, 00:16:40.389 "write_zeroes": true, 00:16:40.389 "zcopy": true, 00:16:40.389 "get_zone_info": false, 00:16:40.389 "zone_management": false, 00:16:40.389 "zone_append": false, 00:16:40.389 "compare": false, 00:16:40.389 "compare_and_write": false, 00:16:40.389 "abort": true, 00:16:40.389 "seek_hole": false, 00:16:40.389 "seek_data": false, 00:16:40.389 "copy": true, 00:16:40.389 "nvme_iov_md": false 00:16:40.389 }, 00:16:40.389 "memory_domains": [ 00:16:40.389 { 00:16:40.389 "dma_device_id": "system", 00:16:40.389 "dma_device_type": 1 00:16:40.389 }, 00:16:40.389 { 00:16:40.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.389 "dma_device_type": 2 00:16:40.389 } 00:16:40.389 ], 00:16:40.389 "driver_specific": {} 00:16:40.389 } 
00:16:40.389 ] 00:16:40.389 10:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.389 10:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:40.389 10:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:40.389 10:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:40.389 10:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:40.389 10:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.389 10:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.389 10:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:40.389 10:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.389 10:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.389 10:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.389 10:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.389 10:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.389 10:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.389 10:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.389 10:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.389 10:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:40.389 10:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.389 "name": "Existed_Raid", 00:16:40.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.389 "strip_size_kb": 64, 00:16:40.389 "state": "configuring", 00:16:40.389 "raid_level": "raid5f", 00:16:40.389 "superblock": false, 00:16:40.389 "num_base_bdevs": 4, 00:16:40.389 "num_base_bdevs_discovered": 1, 00:16:40.389 "num_base_bdevs_operational": 4, 00:16:40.389 "base_bdevs_list": [ 00:16:40.389 { 00:16:40.389 "name": "BaseBdev1", 00:16:40.389 "uuid": "b65b9bbf-034b-43cf-91da-71b716623af1", 00:16:40.389 "is_configured": true, 00:16:40.389 "data_offset": 0, 00:16:40.389 "data_size": 65536 00:16:40.389 }, 00:16:40.389 { 00:16:40.389 "name": "BaseBdev2", 00:16:40.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.389 "is_configured": false, 00:16:40.389 "data_offset": 0, 00:16:40.389 "data_size": 0 00:16:40.389 }, 00:16:40.389 { 00:16:40.389 "name": "BaseBdev3", 00:16:40.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.389 "is_configured": false, 00:16:40.389 "data_offset": 0, 00:16:40.389 "data_size": 0 00:16:40.389 }, 00:16:40.389 { 00:16:40.389 "name": "BaseBdev4", 00:16:40.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.389 "is_configured": false, 00:16:40.389 "data_offset": 0, 00:16:40.389 "data_size": 0 00:16:40.389 } 00:16:40.389 ] 00:16:40.389 }' 00:16:40.389 10:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.389 10:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.648 10:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:40.648 10:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.648 10:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.648 
[2024-11-20 10:39:44.081102] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:40.648 [2024-11-20 10:39:44.081195] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:40.648 10:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.649 10:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:40.649 10:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.649 10:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.649 [2024-11-20 10:39:44.093140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:40.649 [2024-11-20 10:39:44.094945] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:40.649 [2024-11-20 10:39:44.095030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:40.649 [2024-11-20 10:39:44.095076] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:40.649 [2024-11-20 10:39:44.095101] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:40.649 [2024-11-20 10:39:44.095120] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:40.649 [2024-11-20 10:39:44.095140] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:40.649 10:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.649 10:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:40.649 10:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:16:40.649 10:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:40.649 10:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:40.649 10:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:40.649 10:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.649 10:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.649 10:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:40.649 10:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.649 10:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.649 10:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.649 10:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.649 10:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.649 10:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.649 10:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.649 10:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.907 10:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.907 10:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.907 "name": "Existed_Raid", 00:16:40.907 "uuid": "00000000-0000-0000-0000-000000000000", 
00:16:40.907 "strip_size_kb": 64, 00:16:40.907 "state": "configuring", 00:16:40.907 "raid_level": "raid5f", 00:16:40.907 "superblock": false, 00:16:40.907 "num_base_bdevs": 4, 00:16:40.907 "num_base_bdevs_discovered": 1, 00:16:40.907 "num_base_bdevs_operational": 4, 00:16:40.907 "base_bdevs_list": [ 00:16:40.907 { 00:16:40.907 "name": "BaseBdev1", 00:16:40.907 "uuid": "b65b9bbf-034b-43cf-91da-71b716623af1", 00:16:40.908 "is_configured": true, 00:16:40.908 "data_offset": 0, 00:16:40.908 "data_size": 65536 00:16:40.908 }, 00:16:40.908 { 00:16:40.908 "name": "BaseBdev2", 00:16:40.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.908 "is_configured": false, 00:16:40.908 "data_offset": 0, 00:16:40.908 "data_size": 0 00:16:40.908 }, 00:16:40.908 { 00:16:40.908 "name": "BaseBdev3", 00:16:40.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.908 "is_configured": false, 00:16:40.908 "data_offset": 0, 00:16:40.908 "data_size": 0 00:16:40.908 }, 00:16:40.908 { 00:16:40.908 "name": "BaseBdev4", 00:16:40.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.908 "is_configured": false, 00:16:40.908 "data_offset": 0, 00:16:40.908 "data_size": 0 00:16:40.908 } 00:16:40.908 ] 00:16:40.908 }' 00:16:40.908 10:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.908 10:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.167 10:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:41.167 10:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.167 10:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.167 [2024-11-20 10:39:44.563632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:41.167 BaseBdev2 00:16:41.167 10:39:44 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.167 10:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:41.167 10:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:41.167 10:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:41.167 10:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:41.167 10:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:41.167 10:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:41.167 10:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:41.167 10:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.167 10:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.167 10:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.167 10:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:41.167 10:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.167 10:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.167 [ 00:16:41.167 { 00:16:41.167 "name": "BaseBdev2", 00:16:41.167 "aliases": [ 00:16:41.167 "ab4615b0-2f0c-4f57-93ab-8e3368240247" 00:16:41.167 ], 00:16:41.167 "product_name": "Malloc disk", 00:16:41.167 "block_size": 512, 00:16:41.167 "num_blocks": 65536, 00:16:41.167 "uuid": "ab4615b0-2f0c-4f57-93ab-8e3368240247", 00:16:41.167 "assigned_rate_limits": { 00:16:41.167 "rw_ios_per_sec": 0, 00:16:41.167 "rw_mbytes_per_sec": 0, 00:16:41.167 
"r_mbytes_per_sec": 0, 00:16:41.167 "w_mbytes_per_sec": 0 00:16:41.167 }, 00:16:41.167 "claimed": true, 00:16:41.167 "claim_type": "exclusive_write", 00:16:41.167 "zoned": false, 00:16:41.167 "supported_io_types": { 00:16:41.167 "read": true, 00:16:41.167 "write": true, 00:16:41.167 "unmap": true, 00:16:41.167 "flush": true, 00:16:41.167 "reset": true, 00:16:41.167 "nvme_admin": false, 00:16:41.167 "nvme_io": false, 00:16:41.167 "nvme_io_md": false, 00:16:41.167 "write_zeroes": true, 00:16:41.167 "zcopy": true, 00:16:41.167 "get_zone_info": false, 00:16:41.167 "zone_management": false, 00:16:41.167 "zone_append": false, 00:16:41.167 "compare": false, 00:16:41.167 "compare_and_write": false, 00:16:41.167 "abort": true, 00:16:41.167 "seek_hole": false, 00:16:41.167 "seek_data": false, 00:16:41.167 "copy": true, 00:16:41.167 "nvme_iov_md": false 00:16:41.167 }, 00:16:41.167 "memory_domains": [ 00:16:41.167 { 00:16:41.167 "dma_device_id": "system", 00:16:41.167 "dma_device_type": 1 00:16:41.167 }, 00:16:41.167 { 00:16:41.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.167 "dma_device_type": 2 00:16:41.167 } 00:16:41.167 ], 00:16:41.167 "driver_specific": {} 00:16:41.167 } 00:16:41.167 ] 00:16:41.167 10:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.167 10:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:41.167 10:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:41.167 10:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:41.167 10:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:41.167 10:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:41.167 10:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:16:41.167 10:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.167 10:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.167 10:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:41.167 10:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.167 10:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.167 10:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.167 10:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.167 10:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.167 10:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.167 10:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.167 10:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.167 10:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.426 10:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.426 "name": "Existed_Raid", 00:16:41.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.426 "strip_size_kb": 64, 00:16:41.426 "state": "configuring", 00:16:41.426 "raid_level": "raid5f", 00:16:41.426 "superblock": false, 00:16:41.426 "num_base_bdevs": 4, 00:16:41.426 "num_base_bdevs_discovered": 2, 00:16:41.426 "num_base_bdevs_operational": 4, 00:16:41.426 "base_bdevs_list": [ 00:16:41.426 { 00:16:41.426 "name": "BaseBdev1", 00:16:41.426 "uuid": 
"b65b9bbf-034b-43cf-91da-71b716623af1", 00:16:41.426 "is_configured": true, 00:16:41.426 "data_offset": 0, 00:16:41.426 "data_size": 65536 00:16:41.426 }, 00:16:41.426 { 00:16:41.426 "name": "BaseBdev2", 00:16:41.426 "uuid": "ab4615b0-2f0c-4f57-93ab-8e3368240247", 00:16:41.426 "is_configured": true, 00:16:41.426 "data_offset": 0, 00:16:41.426 "data_size": 65536 00:16:41.426 }, 00:16:41.426 { 00:16:41.426 "name": "BaseBdev3", 00:16:41.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.426 "is_configured": false, 00:16:41.426 "data_offset": 0, 00:16:41.426 "data_size": 0 00:16:41.426 }, 00:16:41.426 { 00:16:41.426 "name": "BaseBdev4", 00:16:41.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.426 "is_configured": false, 00:16:41.426 "data_offset": 0, 00:16:41.426 "data_size": 0 00:16:41.426 } 00:16:41.426 ] 00:16:41.426 }' 00:16:41.426 10:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.426 10:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.686 10:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:41.686 10:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.686 10:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.686 [2024-11-20 10:39:45.047587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:41.686 BaseBdev3 00:16:41.686 10:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.686 10:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:41.686 10:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:41.686 10:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:16:41.686 10:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:41.686 10:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:41.686 10:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:41.686 10:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:41.686 10:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.686 10:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.686 10:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.686 10:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:41.686 10:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.686 10:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.686 [ 00:16:41.686 { 00:16:41.686 "name": "BaseBdev3", 00:16:41.686 "aliases": [ 00:16:41.686 "ef677c08-ec8f-41fd-872b-8b639a8e0ac6" 00:16:41.686 ], 00:16:41.686 "product_name": "Malloc disk", 00:16:41.686 "block_size": 512, 00:16:41.686 "num_blocks": 65536, 00:16:41.686 "uuid": "ef677c08-ec8f-41fd-872b-8b639a8e0ac6", 00:16:41.686 "assigned_rate_limits": { 00:16:41.686 "rw_ios_per_sec": 0, 00:16:41.686 "rw_mbytes_per_sec": 0, 00:16:41.686 "r_mbytes_per_sec": 0, 00:16:41.686 "w_mbytes_per_sec": 0 00:16:41.686 }, 00:16:41.686 "claimed": true, 00:16:41.686 "claim_type": "exclusive_write", 00:16:41.686 "zoned": false, 00:16:41.686 "supported_io_types": { 00:16:41.686 "read": true, 00:16:41.686 "write": true, 00:16:41.686 "unmap": true, 00:16:41.686 "flush": true, 00:16:41.686 "reset": true, 00:16:41.686 "nvme_admin": false, 
00:16:41.686 "nvme_io": false, 00:16:41.686 "nvme_io_md": false, 00:16:41.686 "write_zeroes": true, 00:16:41.686 "zcopy": true, 00:16:41.686 "get_zone_info": false, 00:16:41.686 "zone_management": false, 00:16:41.686 "zone_append": false, 00:16:41.686 "compare": false, 00:16:41.686 "compare_and_write": false, 00:16:41.686 "abort": true, 00:16:41.686 "seek_hole": false, 00:16:41.686 "seek_data": false, 00:16:41.686 "copy": true, 00:16:41.686 "nvme_iov_md": false 00:16:41.686 }, 00:16:41.686 "memory_domains": [ 00:16:41.686 { 00:16:41.686 "dma_device_id": "system", 00:16:41.686 "dma_device_type": 1 00:16:41.686 }, 00:16:41.686 { 00:16:41.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.686 "dma_device_type": 2 00:16:41.686 } 00:16:41.686 ], 00:16:41.686 "driver_specific": {} 00:16:41.686 } 00:16:41.686 ] 00:16:41.686 10:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.686 10:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:41.686 10:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:41.686 10:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:41.686 10:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:41.686 10:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:41.686 10:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:41.686 10:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.686 10:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.686 10:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:16:41.686 10:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.686 10:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.686 10:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.686 10:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.686 10:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.686 10:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.686 10:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.686 10:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.686 10:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.686 10:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.686 "name": "Existed_Raid", 00:16:41.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.686 "strip_size_kb": 64, 00:16:41.686 "state": "configuring", 00:16:41.686 "raid_level": "raid5f", 00:16:41.686 "superblock": false, 00:16:41.686 "num_base_bdevs": 4, 00:16:41.686 "num_base_bdevs_discovered": 3, 00:16:41.686 "num_base_bdevs_operational": 4, 00:16:41.686 "base_bdevs_list": [ 00:16:41.686 { 00:16:41.686 "name": "BaseBdev1", 00:16:41.686 "uuid": "b65b9bbf-034b-43cf-91da-71b716623af1", 00:16:41.686 "is_configured": true, 00:16:41.686 "data_offset": 0, 00:16:41.686 "data_size": 65536 00:16:41.686 }, 00:16:41.686 { 00:16:41.686 "name": "BaseBdev2", 00:16:41.686 "uuid": "ab4615b0-2f0c-4f57-93ab-8e3368240247", 00:16:41.686 "is_configured": true, 00:16:41.686 "data_offset": 0, 00:16:41.686 "data_size": 65536 00:16:41.686 }, 00:16:41.686 { 
00:16:41.686 "name": "BaseBdev3", 00:16:41.686 "uuid": "ef677c08-ec8f-41fd-872b-8b639a8e0ac6", 00:16:41.686 "is_configured": true, 00:16:41.686 "data_offset": 0, 00:16:41.686 "data_size": 65536 00:16:41.686 }, 00:16:41.686 { 00:16:41.686 "name": "BaseBdev4", 00:16:41.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.686 "is_configured": false, 00:16:41.686 "data_offset": 0, 00:16:41.686 "data_size": 0 00:16:41.686 } 00:16:41.686 ] 00:16:41.686 }' 00:16:41.686 10:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.686 10:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.254 10:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:42.255 10:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.255 10:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.255 [2024-11-20 10:39:45.554976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:42.255 [2024-11-20 10:39:45.555130] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:42.255 [2024-11-20 10:39:45.555158] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:42.255 [2024-11-20 10:39:45.555488] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:42.255 [2024-11-20 10:39:45.562158] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:42.255 [2024-11-20 10:39:45.562217] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:42.255 [2024-11-20 10:39:45.562554] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.255 BaseBdev4 00:16:42.255 10:39:45 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.255 10:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:42.255 10:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:42.255 10:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:42.255 10:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:42.255 10:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:42.255 10:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:42.255 10:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:42.255 10:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.255 10:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.255 10:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.255 10:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:42.255 10:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.255 10:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.255 [ 00:16:42.255 { 00:16:42.255 "name": "BaseBdev4", 00:16:42.255 "aliases": [ 00:16:42.255 "533dd10d-148e-42e7-8e52-fb0bce90de27" 00:16:42.255 ], 00:16:42.255 "product_name": "Malloc disk", 00:16:42.255 "block_size": 512, 00:16:42.255 "num_blocks": 65536, 00:16:42.255 "uuid": "533dd10d-148e-42e7-8e52-fb0bce90de27", 00:16:42.255 "assigned_rate_limits": { 00:16:42.255 "rw_ios_per_sec": 0, 00:16:42.255 
"rw_mbytes_per_sec": 0, 00:16:42.255 "r_mbytes_per_sec": 0, 00:16:42.255 "w_mbytes_per_sec": 0 00:16:42.255 }, 00:16:42.255 "claimed": true, 00:16:42.255 "claim_type": "exclusive_write", 00:16:42.255 "zoned": false, 00:16:42.255 "supported_io_types": { 00:16:42.255 "read": true, 00:16:42.255 "write": true, 00:16:42.255 "unmap": true, 00:16:42.255 "flush": true, 00:16:42.255 "reset": true, 00:16:42.255 "nvme_admin": false, 00:16:42.255 "nvme_io": false, 00:16:42.255 "nvme_io_md": false, 00:16:42.255 "write_zeroes": true, 00:16:42.255 "zcopy": true, 00:16:42.255 "get_zone_info": false, 00:16:42.255 "zone_management": false, 00:16:42.255 "zone_append": false, 00:16:42.255 "compare": false, 00:16:42.255 "compare_and_write": false, 00:16:42.255 "abort": true, 00:16:42.255 "seek_hole": false, 00:16:42.255 "seek_data": false, 00:16:42.255 "copy": true, 00:16:42.255 "nvme_iov_md": false 00:16:42.255 }, 00:16:42.255 "memory_domains": [ 00:16:42.255 { 00:16:42.255 "dma_device_id": "system", 00:16:42.255 "dma_device_type": 1 00:16:42.255 }, 00:16:42.255 { 00:16:42.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.255 "dma_device_type": 2 00:16:42.255 } 00:16:42.255 ], 00:16:42.255 "driver_specific": {} 00:16:42.255 } 00:16:42.255 ] 00:16:42.255 10:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.255 10:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:42.255 10:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:42.255 10:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:42.255 10:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:42.255 10:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:42.255 10:39:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.255 10:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.255 10:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.255 10:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:42.255 10:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.255 10:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.255 10:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.255 10:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.255 10:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.255 10:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.255 10:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.255 10:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.255 10:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.255 10:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.255 "name": "Existed_Raid", 00:16:42.255 "uuid": "d1a377bf-6b3e-4aab-99c8-da933c98a277", 00:16:42.255 "strip_size_kb": 64, 00:16:42.255 "state": "online", 00:16:42.255 "raid_level": "raid5f", 00:16:42.255 "superblock": false, 00:16:42.255 "num_base_bdevs": 4, 00:16:42.255 "num_base_bdevs_discovered": 4, 00:16:42.255 "num_base_bdevs_operational": 4, 00:16:42.255 "base_bdevs_list": [ 00:16:42.255 { 00:16:42.255 "name": 
"BaseBdev1", 00:16:42.255 "uuid": "b65b9bbf-034b-43cf-91da-71b716623af1", 00:16:42.255 "is_configured": true, 00:16:42.255 "data_offset": 0, 00:16:42.255 "data_size": 65536 00:16:42.255 }, 00:16:42.255 { 00:16:42.255 "name": "BaseBdev2", 00:16:42.255 "uuid": "ab4615b0-2f0c-4f57-93ab-8e3368240247", 00:16:42.255 "is_configured": true, 00:16:42.255 "data_offset": 0, 00:16:42.255 "data_size": 65536 00:16:42.255 }, 00:16:42.255 { 00:16:42.255 "name": "BaseBdev3", 00:16:42.255 "uuid": "ef677c08-ec8f-41fd-872b-8b639a8e0ac6", 00:16:42.255 "is_configured": true, 00:16:42.255 "data_offset": 0, 00:16:42.255 "data_size": 65536 00:16:42.255 }, 00:16:42.255 { 00:16:42.255 "name": "BaseBdev4", 00:16:42.255 "uuid": "533dd10d-148e-42e7-8e52-fb0bce90de27", 00:16:42.255 "is_configured": true, 00:16:42.255 "data_offset": 0, 00:16:42.255 "data_size": 65536 00:16:42.255 } 00:16:42.255 ] 00:16:42.255 }' 00:16:42.255 10:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.255 10:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.823 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:42.823 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:42.823 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:42.823 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:42.823 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:42.823 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:42.823 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:42.823 10:39:46 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.823 10:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.823 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:42.823 [2024-11-20 10:39:46.030021] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:42.823 10:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.823 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:42.823 "name": "Existed_Raid", 00:16:42.823 "aliases": [ 00:16:42.823 "d1a377bf-6b3e-4aab-99c8-da933c98a277" 00:16:42.823 ], 00:16:42.823 "product_name": "Raid Volume", 00:16:42.823 "block_size": 512, 00:16:42.823 "num_blocks": 196608, 00:16:42.823 "uuid": "d1a377bf-6b3e-4aab-99c8-da933c98a277", 00:16:42.823 "assigned_rate_limits": { 00:16:42.823 "rw_ios_per_sec": 0, 00:16:42.823 "rw_mbytes_per_sec": 0, 00:16:42.823 "r_mbytes_per_sec": 0, 00:16:42.823 "w_mbytes_per_sec": 0 00:16:42.823 }, 00:16:42.823 "claimed": false, 00:16:42.823 "zoned": false, 00:16:42.823 "supported_io_types": { 00:16:42.823 "read": true, 00:16:42.823 "write": true, 00:16:42.823 "unmap": false, 00:16:42.823 "flush": false, 00:16:42.823 "reset": true, 00:16:42.823 "nvme_admin": false, 00:16:42.823 "nvme_io": false, 00:16:42.823 "nvme_io_md": false, 00:16:42.823 "write_zeroes": true, 00:16:42.823 "zcopy": false, 00:16:42.823 "get_zone_info": false, 00:16:42.823 "zone_management": false, 00:16:42.823 "zone_append": false, 00:16:42.823 "compare": false, 00:16:42.823 "compare_and_write": false, 00:16:42.823 "abort": false, 00:16:42.823 "seek_hole": false, 00:16:42.823 "seek_data": false, 00:16:42.823 "copy": false, 00:16:42.823 "nvme_iov_md": false 00:16:42.823 }, 00:16:42.823 "driver_specific": { 00:16:42.823 "raid": { 00:16:42.823 "uuid": "d1a377bf-6b3e-4aab-99c8-da933c98a277", 00:16:42.823 "strip_size_kb": 64, 
00:16:42.823 "state": "online", 00:16:42.823 "raid_level": "raid5f", 00:16:42.823 "superblock": false, 00:16:42.823 "num_base_bdevs": 4, 00:16:42.823 "num_base_bdevs_discovered": 4, 00:16:42.823 "num_base_bdevs_operational": 4, 00:16:42.823 "base_bdevs_list": [ 00:16:42.823 { 00:16:42.823 "name": "BaseBdev1", 00:16:42.823 "uuid": "b65b9bbf-034b-43cf-91da-71b716623af1", 00:16:42.823 "is_configured": true, 00:16:42.823 "data_offset": 0, 00:16:42.823 "data_size": 65536 00:16:42.823 }, 00:16:42.823 { 00:16:42.823 "name": "BaseBdev2", 00:16:42.823 "uuid": "ab4615b0-2f0c-4f57-93ab-8e3368240247", 00:16:42.823 "is_configured": true, 00:16:42.823 "data_offset": 0, 00:16:42.823 "data_size": 65536 00:16:42.823 }, 00:16:42.823 { 00:16:42.823 "name": "BaseBdev3", 00:16:42.823 "uuid": "ef677c08-ec8f-41fd-872b-8b639a8e0ac6", 00:16:42.823 "is_configured": true, 00:16:42.823 "data_offset": 0, 00:16:42.823 "data_size": 65536 00:16:42.823 }, 00:16:42.823 { 00:16:42.823 "name": "BaseBdev4", 00:16:42.823 "uuid": "533dd10d-148e-42e7-8e52-fb0bce90de27", 00:16:42.823 "is_configured": true, 00:16:42.823 "data_offset": 0, 00:16:42.823 "data_size": 65536 00:16:42.823 } 00:16:42.823 ] 00:16:42.823 } 00:16:42.823 } 00:16:42.823 }' 00:16:42.823 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:42.823 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:42.823 BaseBdev2 00:16:42.823 BaseBdev3 00:16:42.823 BaseBdev4' 00:16:42.823 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:42.823 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:42.823 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:42.823 10:39:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:42.823 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:42.823 10:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.823 10:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.823 10:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.823 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:42.823 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:42.823 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:42.823 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:42.823 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:42.823 10:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.823 10:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.823 10:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.823 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:42.823 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:42.823 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:42.823 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:16:42.823 10:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.823 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:42.823 10:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.823 10:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.824 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:43.083 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:43.083 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:43.083 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:43.083 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:43.083 10:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.083 10:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.083 10:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.083 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:43.083 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:43.083 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:43.083 10:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.083 10:39:46 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:16:43.083 [2024-11-20 10:39:46.357275] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:43.083 10:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.083 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:43.083 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:43.083 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:43.083 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:43.083 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:43.083 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:43.083 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:43.083 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.083 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.083 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.083 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:43.083 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.083 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.083 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.083 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.083 10:39:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.083 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.083 10:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.083 10:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.083 10:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.083 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.083 "name": "Existed_Raid", 00:16:43.083 "uuid": "d1a377bf-6b3e-4aab-99c8-da933c98a277", 00:16:43.083 "strip_size_kb": 64, 00:16:43.083 "state": "online", 00:16:43.083 "raid_level": "raid5f", 00:16:43.083 "superblock": false, 00:16:43.083 "num_base_bdevs": 4, 00:16:43.083 "num_base_bdevs_discovered": 3, 00:16:43.083 "num_base_bdevs_operational": 3, 00:16:43.083 "base_bdevs_list": [ 00:16:43.083 { 00:16:43.083 "name": null, 00:16:43.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.083 "is_configured": false, 00:16:43.083 "data_offset": 0, 00:16:43.083 "data_size": 65536 00:16:43.083 }, 00:16:43.083 { 00:16:43.083 "name": "BaseBdev2", 00:16:43.083 "uuid": "ab4615b0-2f0c-4f57-93ab-8e3368240247", 00:16:43.083 "is_configured": true, 00:16:43.083 "data_offset": 0, 00:16:43.083 "data_size": 65536 00:16:43.083 }, 00:16:43.083 { 00:16:43.083 "name": "BaseBdev3", 00:16:43.083 "uuid": "ef677c08-ec8f-41fd-872b-8b639a8e0ac6", 00:16:43.083 "is_configured": true, 00:16:43.083 "data_offset": 0, 00:16:43.083 "data_size": 65536 00:16:43.083 }, 00:16:43.083 { 00:16:43.083 "name": "BaseBdev4", 00:16:43.083 "uuid": "533dd10d-148e-42e7-8e52-fb0bce90de27", 00:16:43.083 "is_configured": true, 00:16:43.083 "data_offset": 0, 00:16:43.083 "data_size": 65536 00:16:43.083 } 00:16:43.083 ] 00:16:43.083 }' 00:16:43.083 
10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.083 10:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.651 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:43.651 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:43.651 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.651 10:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.651 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:43.651 10:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.651 10:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.651 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:43.651 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:43.651 10:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:43.651 10:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.651 10:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.651 [2024-11-20 10:39:47.004544] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:43.651 [2024-11-20 10:39:47.004683] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:43.651 [2024-11-20 10:39:47.097352] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:43.651 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:16:43.651 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:43.651 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:43.651 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.651 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.651 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.651 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:43.651 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.911 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:43.911 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:43.911 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:43.911 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.911 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.911 [2024-11-20 10:39:47.157254] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:43.911 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.911 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:43.911 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:43.911 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:43.911 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:43.911 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.911 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.911 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.911 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:43.911 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:43.911 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:43.911 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.911 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.911 [2024-11-20 10:39:47.296319] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:43.911 [2024-11-20 10:39:47.296431] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:44.169 10:39:47 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.169 BaseBdev2 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.169 [ 00:16:44.169 { 00:16:44.169 "name": "BaseBdev2", 00:16:44.169 "aliases": [ 00:16:44.169 "7ac3fd3a-be7a-403a-962f-b3c8b1942147" 00:16:44.169 ], 00:16:44.169 "product_name": "Malloc disk", 00:16:44.169 "block_size": 512, 00:16:44.169 "num_blocks": 65536, 00:16:44.169 "uuid": "7ac3fd3a-be7a-403a-962f-b3c8b1942147", 00:16:44.169 "assigned_rate_limits": { 00:16:44.169 "rw_ios_per_sec": 0, 00:16:44.169 "rw_mbytes_per_sec": 0, 00:16:44.169 "r_mbytes_per_sec": 0, 00:16:44.169 "w_mbytes_per_sec": 0 00:16:44.169 }, 00:16:44.169 "claimed": false, 00:16:44.169 "zoned": false, 00:16:44.169 "supported_io_types": { 00:16:44.169 "read": true, 00:16:44.169 "write": true, 00:16:44.169 "unmap": true, 00:16:44.169 "flush": true, 00:16:44.169 "reset": true, 00:16:44.169 "nvme_admin": false, 00:16:44.169 "nvme_io": false, 00:16:44.169 "nvme_io_md": false, 00:16:44.169 "write_zeroes": true, 00:16:44.169 "zcopy": true, 00:16:44.169 "get_zone_info": false, 00:16:44.169 "zone_management": false, 00:16:44.169 "zone_append": false, 00:16:44.169 "compare": false, 00:16:44.169 "compare_and_write": false, 00:16:44.169 "abort": true, 00:16:44.169 "seek_hole": false, 00:16:44.169 "seek_data": false, 00:16:44.169 "copy": true, 00:16:44.169 "nvme_iov_md": false 00:16:44.169 }, 00:16:44.169 "memory_domains": [ 00:16:44.169 { 00:16:44.169 "dma_device_id": "system", 00:16:44.169 "dma_device_type": 1 00:16:44.169 }, 
00:16:44.169 { 00:16:44.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.169 "dma_device_type": 2 00:16:44.169 } 00:16:44.169 ], 00:16:44.169 "driver_specific": {} 00:16:44.169 } 00:16:44.169 ] 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.169 BaseBdev3 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.169 [ 00:16:44.169 { 00:16:44.169 "name": "BaseBdev3", 00:16:44.169 "aliases": [ 00:16:44.169 "3208ff9b-58d5-4c85-847b-13932e0585ee" 00:16:44.169 ], 00:16:44.169 "product_name": "Malloc disk", 00:16:44.169 "block_size": 512, 00:16:44.169 "num_blocks": 65536, 00:16:44.169 "uuid": "3208ff9b-58d5-4c85-847b-13932e0585ee", 00:16:44.169 "assigned_rate_limits": { 00:16:44.169 "rw_ios_per_sec": 0, 00:16:44.169 "rw_mbytes_per_sec": 0, 00:16:44.169 "r_mbytes_per_sec": 0, 00:16:44.169 "w_mbytes_per_sec": 0 00:16:44.169 }, 00:16:44.169 "claimed": false, 00:16:44.169 "zoned": false, 00:16:44.169 "supported_io_types": { 00:16:44.169 "read": true, 00:16:44.169 "write": true, 00:16:44.169 "unmap": true, 00:16:44.169 "flush": true, 00:16:44.169 "reset": true, 00:16:44.169 "nvme_admin": false, 00:16:44.169 "nvme_io": false, 00:16:44.169 "nvme_io_md": false, 00:16:44.169 "write_zeroes": true, 00:16:44.169 "zcopy": true, 00:16:44.169 "get_zone_info": false, 00:16:44.169 "zone_management": false, 00:16:44.169 "zone_append": false, 00:16:44.169 "compare": false, 00:16:44.169 "compare_and_write": false, 00:16:44.169 "abort": true, 00:16:44.169 "seek_hole": false, 00:16:44.169 "seek_data": false, 00:16:44.169 "copy": true, 00:16:44.169 "nvme_iov_md": false 00:16:44.169 }, 00:16:44.169 "memory_domains": [ 00:16:44.169 { 00:16:44.169 "dma_device_id": "system", 00:16:44.169 
"dma_device_type": 1 00:16:44.169 }, 00:16:44.169 { 00:16:44.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.169 "dma_device_type": 2 00:16:44.169 } 00:16:44.169 ], 00:16:44.169 "driver_specific": {} 00:16:44.169 } 00:16:44.169 ] 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.169 BaseBdev4 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:44.169 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:44.428 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:44.428 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:44.428 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:44.428 10:39:47 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.428 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.428 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.428 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:44.428 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.428 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.428 [ 00:16:44.428 { 00:16:44.428 "name": "BaseBdev4", 00:16:44.428 "aliases": [ 00:16:44.428 "8c7f3869-9f59-4bec-a2a3-9c8575b2a297" 00:16:44.428 ], 00:16:44.428 "product_name": "Malloc disk", 00:16:44.428 "block_size": 512, 00:16:44.428 "num_blocks": 65536, 00:16:44.428 "uuid": "8c7f3869-9f59-4bec-a2a3-9c8575b2a297", 00:16:44.428 "assigned_rate_limits": { 00:16:44.428 "rw_ios_per_sec": 0, 00:16:44.428 "rw_mbytes_per_sec": 0, 00:16:44.428 "r_mbytes_per_sec": 0, 00:16:44.428 "w_mbytes_per_sec": 0 00:16:44.428 }, 00:16:44.428 "claimed": false, 00:16:44.428 "zoned": false, 00:16:44.428 "supported_io_types": { 00:16:44.428 "read": true, 00:16:44.428 "write": true, 00:16:44.428 "unmap": true, 00:16:44.428 "flush": true, 00:16:44.428 "reset": true, 00:16:44.428 "nvme_admin": false, 00:16:44.428 "nvme_io": false, 00:16:44.428 "nvme_io_md": false, 00:16:44.428 "write_zeroes": true, 00:16:44.428 "zcopy": true, 00:16:44.428 "get_zone_info": false, 00:16:44.428 "zone_management": false, 00:16:44.428 "zone_append": false, 00:16:44.428 "compare": false, 00:16:44.428 "compare_and_write": false, 00:16:44.428 "abort": true, 00:16:44.428 "seek_hole": false, 00:16:44.428 "seek_data": false, 00:16:44.428 "copy": true, 00:16:44.428 "nvme_iov_md": false 00:16:44.428 }, 00:16:44.428 "memory_domains": [ 00:16:44.428 { 00:16:44.428 
"dma_device_id": "system", 00:16:44.428 "dma_device_type": 1 00:16:44.428 }, 00:16:44.428 { 00:16:44.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.428 "dma_device_type": 2 00:16:44.428 } 00:16:44.428 ], 00:16:44.428 "driver_specific": {} 00:16:44.428 } 00:16:44.428 ] 00:16:44.428 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.428 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:44.428 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:44.428 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:44.428 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:44.428 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.428 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.428 [2024-11-20 10:39:47.689996] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:44.428 [2024-11-20 10:39:47.690096] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:44.428 [2024-11-20 10:39:47.690160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:44.428 [2024-11-20 10:39:47.692122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:44.428 [2024-11-20 10:39:47.692221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:44.428 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.428 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:16:44.429 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:44.429 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:44.429 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:44.429 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.429 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:44.429 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.429 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.429 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.429 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.429 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.429 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.429 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.429 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.429 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.429 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.429 "name": "Existed_Raid", 00:16:44.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.429 "strip_size_kb": 64, 00:16:44.429 "state": "configuring", 00:16:44.429 "raid_level": "raid5f", 00:16:44.429 "superblock": false, 00:16:44.429 
"num_base_bdevs": 4, 00:16:44.429 "num_base_bdevs_discovered": 3, 00:16:44.429 "num_base_bdevs_operational": 4, 00:16:44.429 "base_bdevs_list": [ 00:16:44.429 { 00:16:44.429 "name": "BaseBdev1", 00:16:44.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.429 "is_configured": false, 00:16:44.429 "data_offset": 0, 00:16:44.429 "data_size": 0 00:16:44.429 }, 00:16:44.429 { 00:16:44.429 "name": "BaseBdev2", 00:16:44.429 "uuid": "7ac3fd3a-be7a-403a-962f-b3c8b1942147", 00:16:44.429 "is_configured": true, 00:16:44.429 "data_offset": 0, 00:16:44.429 "data_size": 65536 00:16:44.429 }, 00:16:44.429 { 00:16:44.429 "name": "BaseBdev3", 00:16:44.429 "uuid": "3208ff9b-58d5-4c85-847b-13932e0585ee", 00:16:44.429 "is_configured": true, 00:16:44.429 "data_offset": 0, 00:16:44.429 "data_size": 65536 00:16:44.429 }, 00:16:44.429 { 00:16:44.429 "name": "BaseBdev4", 00:16:44.429 "uuid": "8c7f3869-9f59-4bec-a2a3-9c8575b2a297", 00:16:44.429 "is_configured": true, 00:16:44.429 "data_offset": 0, 00:16:44.429 "data_size": 65536 00:16:44.429 } 00:16:44.429 ] 00:16:44.429 }' 00:16:44.429 10:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.429 10:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.996 10:39:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:44.996 10:39:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.996 10:39:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.996 [2024-11-20 10:39:48.185177] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:44.996 10:39:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.996 10:39:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:16:44.996 10:39:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:44.996 10:39:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:44.996 10:39:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:44.996 10:39:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.996 10:39:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:44.996 10:39:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.996 10:39:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.996 10:39:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.996 10:39:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.996 10:39:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.996 10:39:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.996 10:39:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.996 10:39:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.996 10:39:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.996 10:39:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.996 "name": "Existed_Raid", 00:16:44.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.996 "strip_size_kb": 64, 00:16:44.996 "state": "configuring", 00:16:44.996 "raid_level": "raid5f", 00:16:44.996 "superblock": false, 00:16:44.996 "num_base_bdevs": 4, 
00:16:44.996 "num_base_bdevs_discovered": 2, 00:16:44.996 "num_base_bdevs_operational": 4, 00:16:44.996 "base_bdevs_list": [ 00:16:44.996 { 00:16:44.996 "name": "BaseBdev1", 00:16:44.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.996 "is_configured": false, 00:16:44.996 "data_offset": 0, 00:16:44.996 "data_size": 0 00:16:44.996 }, 00:16:44.996 { 00:16:44.996 "name": null, 00:16:44.996 "uuid": "7ac3fd3a-be7a-403a-962f-b3c8b1942147", 00:16:44.996 "is_configured": false, 00:16:44.996 "data_offset": 0, 00:16:44.996 "data_size": 65536 00:16:44.996 }, 00:16:44.996 { 00:16:44.996 "name": "BaseBdev3", 00:16:44.996 "uuid": "3208ff9b-58d5-4c85-847b-13932e0585ee", 00:16:44.996 "is_configured": true, 00:16:44.996 "data_offset": 0, 00:16:44.996 "data_size": 65536 00:16:44.996 }, 00:16:44.996 { 00:16:44.996 "name": "BaseBdev4", 00:16:44.996 "uuid": "8c7f3869-9f59-4bec-a2a3-9c8575b2a297", 00:16:44.996 "is_configured": true, 00:16:44.996 "data_offset": 0, 00:16:44.996 "data_size": 65536 00:16:44.996 } 00:16:44.996 ] 00:16:44.996 }' 00:16:44.996 10:39:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.996 10:39:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.257 10:39:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:45.257 10:39:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.257 10:39:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.257 10:39:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.257 10:39:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.257 10:39:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:45.257 10:39:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:45.257 10:39:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.257 10:39:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.257 [2024-11-20 10:39:48.700163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:45.257 BaseBdev1 00:16:45.257 10:39:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.257 10:39:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:45.257 10:39:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:45.257 10:39:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:45.257 10:39:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:45.257 10:39:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:45.257 10:39:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:45.257 10:39:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:45.257 10:39:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.257 10:39:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.257 10:39:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.257 10:39:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:45.257 10:39:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.257 10:39:48 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.257 [ 00:16:45.257 { 00:16:45.257 "name": "BaseBdev1", 00:16:45.257 "aliases": [ 00:16:45.257 "fe75d063-29a1-4ba9-870c-1ee271664060" 00:16:45.257 ], 00:16:45.257 "product_name": "Malloc disk", 00:16:45.257 "block_size": 512, 00:16:45.257 "num_blocks": 65536, 00:16:45.257 "uuid": "fe75d063-29a1-4ba9-870c-1ee271664060", 00:16:45.257 "assigned_rate_limits": { 00:16:45.257 "rw_ios_per_sec": 0, 00:16:45.257 "rw_mbytes_per_sec": 0, 00:16:45.257 "r_mbytes_per_sec": 0, 00:16:45.257 "w_mbytes_per_sec": 0 00:16:45.257 }, 00:16:45.257 "claimed": true, 00:16:45.257 "claim_type": "exclusive_write", 00:16:45.257 "zoned": false, 00:16:45.257 "supported_io_types": { 00:16:45.257 "read": true, 00:16:45.257 "write": true, 00:16:45.257 "unmap": true, 00:16:45.257 "flush": true, 00:16:45.257 "reset": true, 00:16:45.257 "nvme_admin": false, 00:16:45.257 "nvme_io": false, 00:16:45.257 "nvme_io_md": false, 00:16:45.257 "write_zeroes": true, 00:16:45.257 "zcopy": true, 00:16:45.257 "get_zone_info": false, 00:16:45.517 "zone_management": false, 00:16:45.517 "zone_append": false, 00:16:45.517 "compare": false, 00:16:45.517 "compare_and_write": false, 00:16:45.517 "abort": true, 00:16:45.517 "seek_hole": false, 00:16:45.517 "seek_data": false, 00:16:45.517 "copy": true, 00:16:45.517 "nvme_iov_md": false 00:16:45.517 }, 00:16:45.517 "memory_domains": [ 00:16:45.517 { 00:16:45.517 "dma_device_id": "system", 00:16:45.517 "dma_device_type": 1 00:16:45.517 }, 00:16:45.517 { 00:16:45.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.517 "dma_device_type": 2 00:16:45.517 } 00:16:45.517 ], 00:16:45.517 "driver_specific": {} 00:16:45.517 } 00:16:45.517 ] 00:16:45.517 10:39:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.517 10:39:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:45.517 10:39:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:45.517 10:39:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:45.517 10:39:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:45.517 10:39:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.517 10:39:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.517 10:39:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:45.517 10:39:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.517 10:39:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.517 10:39:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.517 10:39:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.517 10:39:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.517 10:39:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.517 10:39:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.517 10:39:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.517 10:39:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.517 10:39:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.517 "name": "Existed_Raid", 00:16:45.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.517 "strip_size_kb": 64, 00:16:45.517 "state": 
"configuring", 00:16:45.517 "raid_level": "raid5f", 00:16:45.517 "superblock": false, 00:16:45.517 "num_base_bdevs": 4, 00:16:45.517 "num_base_bdevs_discovered": 3, 00:16:45.517 "num_base_bdevs_operational": 4, 00:16:45.517 "base_bdevs_list": [ 00:16:45.517 { 00:16:45.517 "name": "BaseBdev1", 00:16:45.517 "uuid": "fe75d063-29a1-4ba9-870c-1ee271664060", 00:16:45.517 "is_configured": true, 00:16:45.517 "data_offset": 0, 00:16:45.517 "data_size": 65536 00:16:45.517 }, 00:16:45.517 { 00:16:45.517 "name": null, 00:16:45.517 "uuid": "7ac3fd3a-be7a-403a-962f-b3c8b1942147", 00:16:45.517 "is_configured": false, 00:16:45.517 "data_offset": 0, 00:16:45.517 "data_size": 65536 00:16:45.517 }, 00:16:45.517 { 00:16:45.517 "name": "BaseBdev3", 00:16:45.517 "uuid": "3208ff9b-58d5-4c85-847b-13932e0585ee", 00:16:45.517 "is_configured": true, 00:16:45.517 "data_offset": 0, 00:16:45.517 "data_size": 65536 00:16:45.517 }, 00:16:45.517 { 00:16:45.517 "name": "BaseBdev4", 00:16:45.517 "uuid": "8c7f3869-9f59-4bec-a2a3-9c8575b2a297", 00:16:45.517 "is_configured": true, 00:16:45.517 "data_offset": 0, 00:16:45.517 "data_size": 65536 00:16:45.517 } 00:16:45.517 ] 00:16:45.517 }' 00:16:45.517 10:39:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.517 10:39:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.776 10:39:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.776 10:39:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:45.776 10:39:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.776 10:39:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.776 10:39:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.776 10:39:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:45.776 10:39:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:45.776 10:39:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.776 10:39:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.776 [2024-11-20 10:39:49.223372] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:45.776 10:39:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.776 10:39:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:45.776 10:39:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:45.776 10:39:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:45.776 10:39:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.776 10:39:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.776 10:39:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:45.776 10:39:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.776 10:39:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.776 10:39:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.776 10:39:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.776 10:39:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.776 10:39:49 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.776 10:39:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.776 10:39:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.035 10:39:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.035 10:39:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.035 "name": "Existed_Raid", 00:16:46.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.035 "strip_size_kb": 64, 00:16:46.035 "state": "configuring", 00:16:46.035 "raid_level": "raid5f", 00:16:46.035 "superblock": false, 00:16:46.035 "num_base_bdevs": 4, 00:16:46.035 "num_base_bdevs_discovered": 2, 00:16:46.035 "num_base_bdevs_operational": 4, 00:16:46.035 "base_bdevs_list": [ 00:16:46.035 { 00:16:46.035 "name": "BaseBdev1", 00:16:46.035 "uuid": "fe75d063-29a1-4ba9-870c-1ee271664060", 00:16:46.035 "is_configured": true, 00:16:46.035 "data_offset": 0, 00:16:46.035 "data_size": 65536 00:16:46.035 }, 00:16:46.035 { 00:16:46.035 "name": null, 00:16:46.035 "uuid": "7ac3fd3a-be7a-403a-962f-b3c8b1942147", 00:16:46.035 "is_configured": false, 00:16:46.035 "data_offset": 0, 00:16:46.035 "data_size": 65536 00:16:46.035 }, 00:16:46.035 { 00:16:46.035 "name": null, 00:16:46.035 "uuid": "3208ff9b-58d5-4c85-847b-13932e0585ee", 00:16:46.035 "is_configured": false, 00:16:46.035 "data_offset": 0, 00:16:46.035 "data_size": 65536 00:16:46.035 }, 00:16:46.035 { 00:16:46.035 "name": "BaseBdev4", 00:16:46.035 "uuid": "8c7f3869-9f59-4bec-a2a3-9c8575b2a297", 00:16:46.035 "is_configured": true, 00:16:46.035 "data_offset": 0, 00:16:46.035 "data_size": 65536 00:16:46.035 } 00:16:46.035 ] 00:16:46.035 }' 00:16:46.035 10:39:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.035 10:39:49 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.294 10:39:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:46.294 10:39:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.294 10:39:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.294 10:39:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.294 10:39:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.294 10:39:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:46.294 10:39:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:46.294 10:39:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.294 10:39:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.294 [2024-11-20 10:39:49.702542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:46.294 10:39:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.294 10:39:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:46.294 10:39:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:46.294 10:39:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:46.294 10:39:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.294 10:39:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.294 
10:39:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:46.294 10:39:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.294 10:39:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.294 10:39:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.294 10:39:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.294 10:39:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.294 10:39:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.294 10:39:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.294 10:39:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.294 10:39:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.294 10:39:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.294 "name": "Existed_Raid", 00:16:46.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.294 "strip_size_kb": 64, 00:16:46.294 "state": "configuring", 00:16:46.294 "raid_level": "raid5f", 00:16:46.294 "superblock": false, 00:16:46.294 "num_base_bdevs": 4, 00:16:46.294 "num_base_bdevs_discovered": 3, 00:16:46.294 "num_base_bdevs_operational": 4, 00:16:46.294 "base_bdevs_list": [ 00:16:46.294 { 00:16:46.294 "name": "BaseBdev1", 00:16:46.294 "uuid": "fe75d063-29a1-4ba9-870c-1ee271664060", 00:16:46.294 "is_configured": true, 00:16:46.294 "data_offset": 0, 00:16:46.294 "data_size": 65536 00:16:46.294 }, 00:16:46.294 { 00:16:46.294 "name": null, 00:16:46.294 "uuid": "7ac3fd3a-be7a-403a-962f-b3c8b1942147", 00:16:46.294 "is_configured": 
false, 00:16:46.294 "data_offset": 0, 00:16:46.294 "data_size": 65536 00:16:46.294 }, 00:16:46.294 { 00:16:46.294 "name": "BaseBdev3", 00:16:46.294 "uuid": "3208ff9b-58d5-4c85-847b-13932e0585ee", 00:16:46.294 "is_configured": true, 00:16:46.294 "data_offset": 0, 00:16:46.294 "data_size": 65536 00:16:46.294 }, 00:16:46.294 { 00:16:46.294 "name": "BaseBdev4", 00:16:46.294 "uuid": "8c7f3869-9f59-4bec-a2a3-9c8575b2a297", 00:16:46.294 "is_configured": true, 00:16:46.294 "data_offset": 0, 00:16:46.294 "data_size": 65536 00:16:46.294 } 00:16:46.294 ] 00:16:46.294 }' 00:16:46.294 10:39:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.294 10:39:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.861 10:39:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:46.861 10:39:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.861 10:39:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.861 10:39:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.861 10:39:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.861 10:39:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:46.861 10:39:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:46.861 10:39:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.861 10:39:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.861 [2024-11-20 10:39:50.205723] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:46.861 10:39:50 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.861 10:39:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:46.861 10:39:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:46.861 10:39:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:46.861 10:39:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.861 10:39:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.861 10:39:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:46.861 10:39:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.861 10:39:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.861 10:39:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.861 10:39:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.861 10:39:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.861 10:39:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.861 10:39:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.861 10:39:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.861 10:39:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.120 10:39:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.120 "name": "Existed_Raid", 00:16:47.120 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:47.120 "strip_size_kb": 64, 00:16:47.120 "state": "configuring", 00:16:47.120 "raid_level": "raid5f", 00:16:47.120 "superblock": false, 00:16:47.120 "num_base_bdevs": 4, 00:16:47.120 "num_base_bdevs_discovered": 2, 00:16:47.120 "num_base_bdevs_operational": 4, 00:16:47.120 "base_bdevs_list": [ 00:16:47.120 { 00:16:47.120 "name": null, 00:16:47.120 "uuid": "fe75d063-29a1-4ba9-870c-1ee271664060", 00:16:47.120 "is_configured": false, 00:16:47.120 "data_offset": 0, 00:16:47.120 "data_size": 65536 00:16:47.120 }, 00:16:47.120 { 00:16:47.120 "name": null, 00:16:47.120 "uuid": "7ac3fd3a-be7a-403a-962f-b3c8b1942147", 00:16:47.120 "is_configured": false, 00:16:47.120 "data_offset": 0, 00:16:47.120 "data_size": 65536 00:16:47.120 }, 00:16:47.120 { 00:16:47.120 "name": "BaseBdev3", 00:16:47.120 "uuid": "3208ff9b-58d5-4c85-847b-13932e0585ee", 00:16:47.120 "is_configured": true, 00:16:47.120 "data_offset": 0, 00:16:47.121 "data_size": 65536 00:16:47.121 }, 00:16:47.121 { 00:16:47.121 "name": "BaseBdev4", 00:16:47.121 "uuid": "8c7f3869-9f59-4bec-a2a3-9c8575b2a297", 00:16:47.121 "is_configured": true, 00:16:47.121 "data_offset": 0, 00:16:47.121 "data_size": 65536 00:16:47.121 } 00:16:47.121 ] 00:16:47.121 }' 00:16:47.121 10:39:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.121 10:39:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.380 10:39:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:47.380 10:39:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.380 10:39:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.380 10:39:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.380 10:39:50 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.380 10:39:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:47.380 10:39:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:47.380 10:39:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.380 10:39:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.380 [2024-11-20 10:39:50.771663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:47.380 10:39:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.380 10:39:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:47.380 10:39:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:47.380 10:39:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:47.380 10:39:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.380 10:39:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.380 10:39:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:47.380 10:39:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.380 10:39:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.380 10:39:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.380 10:39:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.380 10:39:50 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.380 10:39:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.380 10:39:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.380 10:39:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.380 10:39:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.380 10:39:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.380 "name": "Existed_Raid", 00:16:47.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.380 "strip_size_kb": 64, 00:16:47.380 "state": "configuring", 00:16:47.380 "raid_level": "raid5f", 00:16:47.380 "superblock": false, 00:16:47.380 "num_base_bdevs": 4, 00:16:47.380 "num_base_bdevs_discovered": 3, 00:16:47.380 "num_base_bdevs_operational": 4, 00:16:47.380 "base_bdevs_list": [ 00:16:47.380 { 00:16:47.380 "name": null, 00:16:47.380 "uuid": "fe75d063-29a1-4ba9-870c-1ee271664060", 00:16:47.380 "is_configured": false, 00:16:47.380 "data_offset": 0, 00:16:47.380 "data_size": 65536 00:16:47.380 }, 00:16:47.380 { 00:16:47.380 "name": "BaseBdev2", 00:16:47.380 "uuid": "7ac3fd3a-be7a-403a-962f-b3c8b1942147", 00:16:47.380 "is_configured": true, 00:16:47.380 "data_offset": 0, 00:16:47.380 "data_size": 65536 00:16:47.380 }, 00:16:47.380 { 00:16:47.380 "name": "BaseBdev3", 00:16:47.380 "uuid": "3208ff9b-58d5-4c85-847b-13932e0585ee", 00:16:47.380 "is_configured": true, 00:16:47.380 "data_offset": 0, 00:16:47.380 "data_size": 65536 00:16:47.380 }, 00:16:47.380 { 00:16:47.380 "name": "BaseBdev4", 00:16:47.380 "uuid": "8c7f3869-9f59-4bec-a2a3-9c8575b2a297", 00:16:47.380 "is_configured": true, 00:16:47.380 "data_offset": 0, 00:16:47.380 "data_size": 65536 00:16:47.380 } 00:16:47.380 ] 00:16:47.380 }' 00:16:47.380 10:39:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.380 10:39:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.951 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.951 10:39:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.951 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:47.951 10:39:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.951 10:39:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.951 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:47.951 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:47.951 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.951 10:39:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.951 10:39:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.951 10:39:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.951 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fe75d063-29a1-4ba9-870c-1ee271664060 00:16:47.951 10:39:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.951 10:39:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.951 [2024-11-20 10:39:51.341335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:47.951 [2024-11-20 
10:39:51.341486] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:47.951 [2024-11-20 10:39:51.341514] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:47.951 [2024-11-20 10:39:51.341832] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:47.951 [2024-11-20 10:39:51.348993] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:47.951 [2024-11-20 10:39:51.349057] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:47.951 [2024-11-20 10:39:51.349398] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.951 NewBaseBdev 00:16:47.951 10:39:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.951 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:47.951 10:39:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:47.951 10:39:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:47.951 10:39:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:47.951 10:39:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:47.951 10:39:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:47.951 10:39:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:47.951 10:39:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.951 10:39:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.951 10:39:51 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.951 10:39:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:47.951 10:39:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.951 10:39:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.951 [ 00:16:47.951 { 00:16:47.951 "name": "NewBaseBdev", 00:16:47.951 "aliases": [ 00:16:47.951 "fe75d063-29a1-4ba9-870c-1ee271664060" 00:16:47.951 ], 00:16:47.951 "product_name": "Malloc disk", 00:16:47.951 "block_size": 512, 00:16:47.951 "num_blocks": 65536, 00:16:47.951 "uuid": "fe75d063-29a1-4ba9-870c-1ee271664060", 00:16:47.951 "assigned_rate_limits": { 00:16:47.951 "rw_ios_per_sec": 0, 00:16:47.951 "rw_mbytes_per_sec": 0, 00:16:47.951 "r_mbytes_per_sec": 0, 00:16:47.951 "w_mbytes_per_sec": 0 00:16:47.951 }, 00:16:47.951 "claimed": true, 00:16:47.951 "claim_type": "exclusive_write", 00:16:47.951 "zoned": false, 00:16:47.951 "supported_io_types": { 00:16:47.951 "read": true, 00:16:47.951 "write": true, 00:16:47.951 "unmap": true, 00:16:47.951 "flush": true, 00:16:47.951 "reset": true, 00:16:47.951 "nvme_admin": false, 00:16:47.951 "nvme_io": false, 00:16:47.951 "nvme_io_md": false, 00:16:47.951 "write_zeroes": true, 00:16:47.951 "zcopy": true, 00:16:47.951 "get_zone_info": false, 00:16:47.951 "zone_management": false, 00:16:47.951 "zone_append": false, 00:16:47.951 "compare": false, 00:16:47.951 "compare_and_write": false, 00:16:47.951 "abort": true, 00:16:47.951 "seek_hole": false, 00:16:47.951 "seek_data": false, 00:16:47.951 "copy": true, 00:16:47.951 "nvme_iov_md": false 00:16:47.951 }, 00:16:47.951 "memory_domains": [ 00:16:47.951 { 00:16:47.951 "dma_device_id": "system", 00:16:47.951 "dma_device_type": 1 00:16:47.951 }, 00:16:47.951 { 00:16:47.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.951 "dma_device_type": 2 00:16:47.951 } 
00:16:47.951 ], 00:16:47.951 "driver_specific": {} 00:16:47.951 } 00:16:47.951 ] 00:16:47.951 10:39:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.951 10:39:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:47.951 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:47.951 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:47.952 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.952 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.952 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.952 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:47.952 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.952 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.952 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.952 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.952 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.952 10:39:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.952 10:39:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.952 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.952 10:39:51 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.211 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.211 "name": "Existed_Raid", 00:16:48.211 "uuid": "8027d1b1-4b0d-40b0-af47-c0a3fbcd80e5", 00:16:48.211 "strip_size_kb": 64, 00:16:48.211 "state": "online", 00:16:48.211 "raid_level": "raid5f", 00:16:48.211 "superblock": false, 00:16:48.211 "num_base_bdevs": 4, 00:16:48.211 "num_base_bdevs_discovered": 4, 00:16:48.211 "num_base_bdevs_operational": 4, 00:16:48.211 "base_bdevs_list": [ 00:16:48.211 { 00:16:48.211 "name": "NewBaseBdev", 00:16:48.211 "uuid": "fe75d063-29a1-4ba9-870c-1ee271664060", 00:16:48.211 "is_configured": true, 00:16:48.211 "data_offset": 0, 00:16:48.211 "data_size": 65536 00:16:48.211 }, 00:16:48.211 { 00:16:48.211 "name": "BaseBdev2", 00:16:48.211 "uuid": "7ac3fd3a-be7a-403a-962f-b3c8b1942147", 00:16:48.211 "is_configured": true, 00:16:48.211 "data_offset": 0, 00:16:48.211 "data_size": 65536 00:16:48.211 }, 00:16:48.211 { 00:16:48.211 "name": "BaseBdev3", 00:16:48.211 "uuid": "3208ff9b-58d5-4c85-847b-13932e0585ee", 00:16:48.211 "is_configured": true, 00:16:48.211 "data_offset": 0, 00:16:48.211 "data_size": 65536 00:16:48.211 }, 00:16:48.211 { 00:16:48.211 "name": "BaseBdev4", 00:16:48.211 "uuid": "8c7f3869-9f59-4bec-a2a3-9c8575b2a297", 00:16:48.211 "is_configured": true, 00:16:48.211 "data_offset": 0, 00:16:48.211 "data_size": 65536 00:16:48.211 } 00:16:48.211 ] 00:16:48.211 }' 00:16:48.211 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.211 10:39:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.469 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:48.469 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:48.470 10:39:51 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:48.470 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:48.470 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:48.470 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:48.470 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:48.470 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:48.470 10:39:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.470 10:39:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.470 [2024-11-20 10:39:51.837576] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:48.470 10:39:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.470 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:48.470 "name": "Existed_Raid", 00:16:48.470 "aliases": [ 00:16:48.470 "8027d1b1-4b0d-40b0-af47-c0a3fbcd80e5" 00:16:48.470 ], 00:16:48.470 "product_name": "Raid Volume", 00:16:48.470 "block_size": 512, 00:16:48.470 "num_blocks": 196608, 00:16:48.470 "uuid": "8027d1b1-4b0d-40b0-af47-c0a3fbcd80e5", 00:16:48.470 "assigned_rate_limits": { 00:16:48.470 "rw_ios_per_sec": 0, 00:16:48.470 "rw_mbytes_per_sec": 0, 00:16:48.470 "r_mbytes_per_sec": 0, 00:16:48.470 "w_mbytes_per_sec": 0 00:16:48.470 }, 00:16:48.470 "claimed": false, 00:16:48.470 "zoned": false, 00:16:48.470 "supported_io_types": { 00:16:48.470 "read": true, 00:16:48.470 "write": true, 00:16:48.470 "unmap": false, 00:16:48.470 "flush": false, 00:16:48.470 "reset": true, 00:16:48.470 "nvme_admin": false, 00:16:48.470 "nvme_io": false, 00:16:48.470 "nvme_io_md": 
false, 00:16:48.470 "write_zeroes": true, 00:16:48.470 "zcopy": false, 00:16:48.470 "get_zone_info": false, 00:16:48.470 "zone_management": false, 00:16:48.470 "zone_append": false, 00:16:48.470 "compare": false, 00:16:48.470 "compare_and_write": false, 00:16:48.470 "abort": false, 00:16:48.470 "seek_hole": false, 00:16:48.470 "seek_data": false, 00:16:48.470 "copy": false, 00:16:48.470 "nvme_iov_md": false 00:16:48.470 }, 00:16:48.470 "driver_specific": { 00:16:48.470 "raid": { 00:16:48.470 "uuid": "8027d1b1-4b0d-40b0-af47-c0a3fbcd80e5", 00:16:48.470 "strip_size_kb": 64, 00:16:48.470 "state": "online", 00:16:48.470 "raid_level": "raid5f", 00:16:48.470 "superblock": false, 00:16:48.470 "num_base_bdevs": 4, 00:16:48.470 "num_base_bdevs_discovered": 4, 00:16:48.470 "num_base_bdevs_operational": 4, 00:16:48.470 "base_bdevs_list": [ 00:16:48.470 { 00:16:48.470 "name": "NewBaseBdev", 00:16:48.470 "uuid": "fe75d063-29a1-4ba9-870c-1ee271664060", 00:16:48.470 "is_configured": true, 00:16:48.470 "data_offset": 0, 00:16:48.470 "data_size": 65536 00:16:48.470 }, 00:16:48.470 { 00:16:48.470 "name": "BaseBdev2", 00:16:48.470 "uuid": "7ac3fd3a-be7a-403a-962f-b3c8b1942147", 00:16:48.470 "is_configured": true, 00:16:48.470 "data_offset": 0, 00:16:48.470 "data_size": 65536 00:16:48.470 }, 00:16:48.470 { 00:16:48.470 "name": "BaseBdev3", 00:16:48.470 "uuid": "3208ff9b-58d5-4c85-847b-13932e0585ee", 00:16:48.470 "is_configured": true, 00:16:48.470 "data_offset": 0, 00:16:48.470 "data_size": 65536 00:16:48.470 }, 00:16:48.470 { 00:16:48.470 "name": "BaseBdev4", 00:16:48.470 "uuid": "8c7f3869-9f59-4bec-a2a3-9c8575b2a297", 00:16:48.470 "is_configured": true, 00:16:48.470 "data_offset": 0, 00:16:48.470 "data_size": 65536 00:16:48.470 } 00:16:48.470 ] 00:16:48.470 } 00:16:48.470 } 00:16:48.470 }' 00:16:48.470 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:48.470 10:39:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:48.470 BaseBdev2 00:16:48.470 BaseBdev3 00:16:48.470 BaseBdev4' 00:16:48.470 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:48.470 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:48.470 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:48.470 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:48.470 10:39:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.470 10:39:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.470 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:48.729 10:39:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.729 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:48.729 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:48.729 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:48.729 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:48.729 10:39:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.729 10:39:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.729 10:39:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:16:48.729 10:39:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.729 10:39:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:48.729 10:39:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:48.729 10:39:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:48.729 10:39:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:48.729 10:39:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:48.729 10:39:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.729 10:39:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.729 10:39:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.729 10:39:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:48.729 10:39:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:48.729 10:39:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:48.729 10:39:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:48.729 10:39:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.729 10:39:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.729 10:39:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:48.729 10:39:52 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.729 10:39:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:48.729 10:39:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:48.729 10:39:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:48.729 10:39:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.729 10:39:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.729 [2024-11-20 10:39:52.144789] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:48.729 [2024-11-20 10:39:52.144858] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:48.729 [2024-11-20 10:39:52.144945] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:48.729 [2024-11-20 10:39:52.145274] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:48.729 [2024-11-20 10:39:52.145329] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:48.729 10:39:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.729 10:39:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82934 00:16:48.729 10:39:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 82934 ']' 00:16:48.729 10:39:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 82934 00:16:48.729 10:39:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:48.729 10:39:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:48.729 10:39:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82934 00:16:48.729 10:39:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:48.729 10:39:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:48.729 killing process with pid 82934 00:16:48.729 10:39:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82934' 00:16:48.729 10:39:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 82934 00:16:48.729 [2024-11-20 10:39:52.192223] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:48.729 10:39:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 82934 00:16:49.296 [2024-11-20 10:39:52.576911] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:50.259 00:16:50.259 real 0m11.443s 00:16:50.259 user 0m18.234s 00:16:50.259 sys 0m2.053s 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.259 ************************************ 00:16:50.259 END TEST raid5f_state_function_test 00:16:50.259 ************************************ 00:16:50.259 10:39:53 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:16:50.259 10:39:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:50.259 10:39:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:50.259 10:39:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:50.259 ************************************ 00:16:50.259 START TEST 
raid5f_state_function_test_sb 00:16:50.259 ************************************ 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:50.259 
10:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83605 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:50.259 Process raid pid: 83605 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83605' 00:16:50.259 10:39:53 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83605 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83605 ']' 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:50.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:50.259 10:39:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.519 [2024-11-20 10:39:53.796891] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:16:50.519 [2024-11-20 10:39:53.797006] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:50.519 [2024-11-20 10:39:53.971676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.779 [2024-11-20 10:39:54.082004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.038 [2024-11-20 10:39:54.276837] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:51.038 [2024-11-20 10:39:54.276878] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:51.298 10:39:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:51.298 10:39:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:51.298 10:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:51.298 10:39:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.298 10:39:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.298 [2024-11-20 10:39:54.623590] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:51.298 [2024-11-20 10:39:54.623642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:51.298 [2024-11-20 10:39:54.623652] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:51.298 [2024-11-20 10:39:54.623678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:51.298 [2024-11-20 10:39:54.623685] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:16:51.298 [2024-11-20 10:39:54.623694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:51.298 [2024-11-20 10:39:54.623700] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:51.298 [2024-11-20 10:39:54.623708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:51.298 10:39:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.298 10:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:51.298 10:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:51.298 10:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:51.298 10:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.298 10:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.298 10:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:51.298 10:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.298 10:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.298 10:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.298 10:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.298 10:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.298 10:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:16:51.298 10:39:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.298 10:39:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.298 10:39:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.298 10:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.298 "name": "Existed_Raid", 00:16:51.298 "uuid": "20ce39d7-35ec-4678-91e4-7977f386e9a0", 00:16:51.298 "strip_size_kb": 64, 00:16:51.298 "state": "configuring", 00:16:51.298 "raid_level": "raid5f", 00:16:51.298 "superblock": true, 00:16:51.298 "num_base_bdevs": 4, 00:16:51.298 "num_base_bdevs_discovered": 0, 00:16:51.298 "num_base_bdevs_operational": 4, 00:16:51.298 "base_bdevs_list": [ 00:16:51.298 { 00:16:51.298 "name": "BaseBdev1", 00:16:51.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.298 "is_configured": false, 00:16:51.298 "data_offset": 0, 00:16:51.298 "data_size": 0 00:16:51.298 }, 00:16:51.298 { 00:16:51.298 "name": "BaseBdev2", 00:16:51.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.298 "is_configured": false, 00:16:51.298 "data_offset": 0, 00:16:51.298 "data_size": 0 00:16:51.298 }, 00:16:51.298 { 00:16:51.298 "name": "BaseBdev3", 00:16:51.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.298 "is_configured": false, 00:16:51.298 "data_offset": 0, 00:16:51.298 "data_size": 0 00:16:51.298 }, 00:16:51.298 { 00:16:51.298 "name": "BaseBdev4", 00:16:51.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.298 "is_configured": false, 00:16:51.298 "data_offset": 0, 00:16:51.298 "data_size": 0 00:16:51.298 } 00:16:51.298 ] 00:16:51.298 }' 00:16:51.298 10:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.298 10:39:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
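The `verify_raid_bdev_state` helper above filters the `bdev_raid_get_bdevs all` output with `jq` and compares a handful of fields against the expected values. A minimal Python sketch of the same check, using the JSON captured in this log (the field names come from the output above; the `check_raid_state` helper itself is hypothetical, not part of SPDK):

```python
import json

# Abridged from the `rpc_cmd bdev_raid_get_bdevs all` output captured above,
# keeping only the fields the shell helper inspects.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid5f",
  "superblock": true,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 0,
  "num_base_bdevs_operational": 4
}
""")

def check_raid_state(info, expected_state, raid_level, strip_size, operational):
    # Hypothetical re-implementation of verify_raid_bdev_state's comparisons.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational

check_raid_state(raid_bdev_info, "configuring", "raid5f", 64, 4)
# No base bdevs exist yet, so none have been discovered or claimed.
assert raid_bdev_info["num_base_bdevs_discovered"] == 0
```

The raid bdev stays in `configuring` until all four base bdevs are discovered; the rest of the log shows `num_base_bdevs_discovered` climbing as each malloc bdev is created.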
00:16:51.556 10:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:51.556 10:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.556 10:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.815 [2024-11-20 10:39:55.034923] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:51.815 [2024-11-20 10:39:55.035011] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:51.815 10:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.815 10:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:51.815 10:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.815 10:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.815 [2024-11-20 10:39:55.046906] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:51.815 [2024-11-20 10:39:55.046992] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:51.815 [2024-11-20 10:39:55.047024] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:51.815 [2024-11-20 10:39:55.047050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:51.815 [2024-11-20 10:39:55.047071] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:51.815 [2024-11-20 10:39:55.047095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:51.815 [2024-11-20 10:39:55.047116] 
bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:51.815 [2024-11-20 10:39:55.047139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:51.815 10:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.815 10:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:51.815 10:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.815 10:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.815 [2024-11-20 10:39:55.094475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:51.815 BaseBdev1 00:16:51.815 10:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.815 10:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:51.815 10:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:51.815 10:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:51.815 10:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:51.815 10:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:51.815 10:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:51.815 10:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:51.815 10:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.815 10:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:51.815 10:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.815 10:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:51.815 10:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.815 10:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.815 [ 00:16:51.815 { 00:16:51.815 "name": "BaseBdev1", 00:16:51.816 "aliases": [ 00:16:51.816 "a0411171-0f41-4655-97e3-8ab4d6806126" 00:16:51.816 ], 00:16:51.816 "product_name": "Malloc disk", 00:16:51.816 "block_size": 512, 00:16:51.816 "num_blocks": 65536, 00:16:51.816 "uuid": "a0411171-0f41-4655-97e3-8ab4d6806126", 00:16:51.816 "assigned_rate_limits": { 00:16:51.816 "rw_ios_per_sec": 0, 00:16:51.816 "rw_mbytes_per_sec": 0, 00:16:51.816 "r_mbytes_per_sec": 0, 00:16:51.816 "w_mbytes_per_sec": 0 00:16:51.816 }, 00:16:51.816 "claimed": true, 00:16:51.816 "claim_type": "exclusive_write", 00:16:51.816 "zoned": false, 00:16:51.816 "supported_io_types": { 00:16:51.816 "read": true, 00:16:51.816 "write": true, 00:16:51.816 "unmap": true, 00:16:51.816 "flush": true, 00:16:51.816 "reset": true, 00:16:51.816 "nvme_admin": false, 00:16:51.816 "nvme_io": false, 00:16:51.816 "nvme_io_md": false, 00:16:51.816 "write_zeroes": true, 00:16:51.816 "zcopy": true, 00:16:51.816 "get_zone_info": false, 00:16:51.816 "zone_management": false, 00:16:51.816 "zone_append": false, 00:16:51.816 "compare": false, 00:16:51.816 "compare_and_write": false, 00:16:51.816 "abort": true, 00:16:51.816 "seek_hole": false, 00:16:51.816 "seek_data": false, 00:16:51.816 "copy": true, 00:16:51.816 "nvme_iov_md": false 00:16:51.816 }, 00:16:51.816 "memory_domains": [ 00:16:51.816 { 00:16:51.816 "dma_device_id": "system", 00:16:51.816 "dma_device_type": 1 00:16:51.816 }, 00:16:51.816 { 00:16:51.816 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:51.816 "dma_device_type": 2 00:16:51.816 } 00:16:51.816 ], 00:16:51.816 "driver_specific": {} 00:16:51.816 } 00:16:51.816 ] 00:16:51.816 10:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.816 10:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:51.816 10:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:51.816 10:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:51.816 10:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:51.816 10:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.816 10:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.816 10:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:51.816 10:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.816 10:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.816 10:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.816 10:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.816 10:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.816 10:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.816 10:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.816 10:39:55 
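The `waitforbdev` helper above polls `bdev_get_bdevs -b BaseBdev1` until the bdev appears; the descriptor it returns reflects both the `bdev_malloc_create 32 512` call and the claim taken by the raid module. A small sketch checking those fields, with the values copied from the descriptor above (the size arithmetic is an assumption about how the `32` and `512` arguments map to MiB and block size):

```python
# Descriptor fields copied from the `bdev_get_bdevs -b BaseBdev1` output above.
bdev = {
    "name": "BaseBdev1",
    "product_name": "Malloc disk",
    "block_size": 512,
    "num_blocks": 65536,
    "claimed": True,
    "claim_type": "exclusive_write",
}

# `bdev_malloc_create 32 512` asked for a 32 MiB disk with 512-byte blocks;
# 65536 blocks * 512 bytes = 32 MiB.
size_mib = bdev["num_blocks"] * bdev["block_size"] // (1024 * 1024)
assert size_mib == 32

# "bdev BaseBdev1 is claimed" in the log corresponds to the raid module
# holding an exclusive write claim on the base bdev.
assert bdev["claimed"] and bdev["claim_type"] == "exclusive_write"
```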
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.816 10:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.816 10:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.816 "name": "Existed_Raid", 00:16:51.816 "uuid": "8f7cd2ae-ad79-44f9-8308-f1b12c51e9f7", 00:16:51.816 "strip_size_kb": 64, 00:16:51.816 "state": "configuring", 00:16:51.816 "raid_level": "raid5f", 00:16:51.816 "superblock": true, 00:16:51.816 "num_base_bdevs": 4, 00:16:51.816 "num_base_bdevs_discovered": 1, 00:16:51.816 "num_base_bdevs_operational": 4, 00:16:51.816 "base_bdevs_list": [ 00:16:51.816 { 00:16:51.816 "name": "BaseBdev1", 00:16:51.816 "uuid": "a0411171-0f41-4655-97e3-8ab4d6806126", 00:16:51.816 "is_configured": true, 00:16:51.816 "data_offset": 2048, 00:16:51.816 "data_size": 63488 00:16:51.816 }, 00:16:51.816 { 00:16:51.816 "name": "BaseBdev2", 00:16:51.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.816 "is_configured": false, 00:16:51.816 "data_offset": 0, 00:16:51.816 "data_size": 0 00:16:51.816 }, 00:16:51.816 { 00:16:51.816 "name": "BaseBdev3", 00:16:51.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.816 "is_configured": false, 00:16:51.816 "data_offset": 0, 00:16:51.816 "data_size": 0 00:16:51.816 }, 00:16:51.816 { 00:16:51.816 "name": "BaseBdev4", 00:16:51.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.816 "is_configured": false, 00:16:51.816 "data_offset": 0, 00:16:51.816 "data_size": 0 00:16:51.816 } 00:16:51.816 ] 00:16:51.816 }' 00:16:51.816 10:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.816 10:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.383 10:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:52.383 10:39:55 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.383 10:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.383 [2024-11-20 10:39:55.585676] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:52.383 [2024-11-20 10:39:55.585789] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:52.383 10:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.383 10:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:52.383 10:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.383 10:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.383 [2024-11-20 10:39:55.597708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:52.383 [2024-11-20 10:39:55.599536] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:52.383 [2024-11-20 10:39:55.599612] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:52.383 [2024-11-20 10:39:55.599640] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:52.383 [2024-11-20 10:39:55.599664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:52.383 [2024-11-20 10:39:55.599681] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:52.383 [2024-11-20 10:39:55.599692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:52.383 10:39:55 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.383 10:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:52.383 10:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:52.383 10:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:52.383 10:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:52.383 10:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.383 10:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:52.383 10:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.383 10:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:52.383 10:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.383 10:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.383 10:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.383 10:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.383 10:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.383 10:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.383 10:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.383 10:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.383 10:39:55 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.383 10:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.383 "name": "Existed_Raid", 00:16:52.383 "uuid": "ac85b330-46dd-4795-8ebb-1c011311566a", 00:16:52.383 "strip_size_kb": 64, 00:16:52.383 "state": "configuring", 00:16:52.383 "raid_level": "raid5f", 00:16:52.383 "superblock": true, 00:16:52.383 "num_base_bdevs": 4, 00:16:52.383 "num_base_bdevs_discovered": 1, 00:16:52.383 "num_base_bdevs_operational": 4, 00:16:52.383 "base_bdevs_list": [ 00:16:52.383 { 00:16:52.383 "name": "BaseBdev1", 00:16:52.383 "uuid": "a0411171-0f41-4655-97e3-8ab4d6806126", 00:16:52.383 "is_configured": true, 00:16:52.383 "data_offset": 2048, 00:16:52.383 "data_size": 63488 00:16:52.383 }, 00:16:52.383 { 00:16:52.383 "name": "BaseBdev2", 00:16:52.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.383 "is_configured": false, 00:16:52.383 "data_offset": 0, 00:16:52.383 "data_size": 0 00:16:52.383 }, 00:16:52.383 { 00:16:52.383 "name": "BaseBdev3", 00:16:52.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.383 "is_configured": false, 00:16:52.383 "data_offset": 0, 00:16:52.383 "data_size": 0 00:16:52.383 }, 00:16:52.383 { 00:16:52.383 "name": "BaseBdev4", 00:16:52.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.383 "is_configured": false, 00:16:52.383 "data_offset": 0, 00:16:52.383 "data_size": 0 00:16:52.383 } 00:16:52.383 ] 00:16:52.383 }' 00:16:52.384 10:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.384 10:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.642 10:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:52.642 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:52.642 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.642 [2024-11-20 10:39:56.103204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:52.642 BaseBdev2 00:16:52.642 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.642 10:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:52.642 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:52.642 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:52.642 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:52.642 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:52.642 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:52.642 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:52.642 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.642 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.900 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.900 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:52.900 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.900 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.900 [ 00:16:52.900 { 00:16:52.900 "name": "BaseBdev2", 00:16:52.900 "aliases": [ 00:16:52.900 
"74fdca9e-2ec0-41b1-89c2-15ea5a0809fc" 00:16:52.900 ], 00:16:52.900 "product_name": "Malloc disk", 00:16:52.900 "block_size": 512, 00:16:52.900 "num_blocks": 65536, 00:16:52.900 "uuid": "74fdca9e-2ec0-41b1-89c2-15ea5a0809fc", 00:16:52.900 "assigned_rate_limits": { 00:16:52.900 "rw_ios_per_sec": 0, 00:16:52.900 "rw_mbytes_per_sec": 0, 00:16:52.900 "r_mbytes_per_sec": 0, 00:16:52.900 "w_mbytes_per_sec": 0 00:16:52.900 }, 00:16:52.900 "claimed": true, 00:16:52.900 "claim_type": "exclusive_write", 00:16:52.900 "zoned": false, 00:16:52.900 "supported_io_types": { 00:16:52.900 "read": true, 00:16:52.900 "write": true, 00:16:52.900 "unmap": true, 00:16:52.900 "flush": true, 00:16:52.900 "reset": true, 00:16:52.900 "nvme_admin": false, 00:16:52.900 "nvme_io": false, 00:16:52.900 "nvme_io_md": false, 00:16:52.900 "write_zeroes": true, 00:16:52.900 "zcopy": true, 00:16:52.900 "get_zone_info": false, 00:16:52.900 "zone_management": false, 00:16:52.900 "zone_append": false, 00:16:52.900 "compare": false, 00:16:52.900 "compare_and_write": false, 00:16:52.900 "abort": true, 00:16:52.900 "seek_hole": false, 00:16:52.900 "seek_data": false, 00:16:52.900 "copy": true, 00:16:52.900 "nvme_iov_md": false 00:16:52.900 }, 00:16:52.900 "memory_domains": [ 00:16:52.900 { 00:16:52.900 "dma_device_id": "system", 00:16:52.900 "dma_device_type": 1 00:16:52.900 }, 00:16:52.900 { 00:16:52.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.900 "dma_device_type": 2 00:16:52.900 } 00:16:52.900 ], 00:16:52.900 "driver_specific": {} 00:16:52.900 } 00:16:52.900 ] 00:16:52.900 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.900 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:52.900 10:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:52.900 10:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:16:52.900 10:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:52.900 10:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:52.900 10:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.900 10:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:52.900 10:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.900 10:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:52.900 10:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.900 10:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.900 10:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.900 10:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.900 10:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.900 10:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.900 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.900 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.900 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.900 10:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.900 "name": "Existed_Raid", 00:16:52.900 "uuid": 
"ac85b330-46dd-4795-8ebb-1c011311566a", 00:16:52.900 "strip_size_kb": 64, 00:16:52.900 "state": "configuring", 00:16:52.900 "raid_level": "raid5f", 00:16:52.900 "superblock": true, 00:16:52.900 "num_base_bdevs": 4, 00:16:52.900 "num_base_bdevs_discovered": 2, 00:16:52.900 "num_base_bdevs_operational": 4, 00:16:52.900 "base_bdevs_list": [ 00:16:52.900 { 00:16:52.900 "name": "BaseBdev1", 00:16:52.900 "uuid": "a0411171-0f41-4655-97e3-8ab4d6806126", 00:16:52.900 "is_configured": true, 00:16:52.900 "data_offset": 2048, 00:16:52.900 "data_size": 63488 00:16:52.900 }, 00:16:52.900 { 00:16:52.900 "name": "BaseBdev2", 00:16:52.900 "uuid": "74fdca9e-2ec0-41b1-89c2-15ea5a0809fc", 00:16:52.900 "is_configured": true, 00:16:52.900 "data_offset": 2048, 00:16:52.900 "data_size": 63488 00:16:52.900 }, 00:16:52.900 { 00:16:52.900 "name": "BaseBdev3", 00:16:52.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.900 "is_configured": false, 00:16:52.900 "data_offset": 0, 00:16:52.900 "data_size": 0 00:16:52.900 }, 00:16:52.900 { 00:16:52.900 "name": "BaseBdev4", 00:16:52.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.900 "is_configured": false, 00:16:52.900 "data_offset": 0, 00:16:52.900 "data_size": 0 00:16:52.900 } 00:16:52.900 ] 00:16:52.900 }' 00:16:52.900 10:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.900 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.159 10:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:53.159 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.159 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.159 [2024-11-20 10:39:56.618024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:53.159 BaseBdev3 
00:16:53.159 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.159 10:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:53.159 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:53.159 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:53.159 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:53.159 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:53.159 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:53.159 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:53.159 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.159 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.159 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.159 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:53.159 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.159 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.419 [ 00:16:53.419 { 00:16:53.419 "name": "BaseBdev3", 00:16:53.419 "aliases": [ 00:16:53.419 "b88dcff4-e230-46f1-a89c-3f7cdbe6cd7e" 00:16:53.419 ], 00:16:53.419 "product_name": "Malloc disk", 00:16:53.419 "block_size": 512, 00:16:53.419 "num_blocks": 65536, 00:16:53.419 "uuid": "b88dcff4-e230-46f1-a89c-3f7cdbe6cd7e", 00:16:53.419 
"assigned_rate_limits": { 00:16:53.419 "rw_ios_per_sec": 0, 00:16:53.419 "rw_mbytes_per_sec": 0, 00:16:53.419 "r_mbytes_per_sec": 0, 00:16:53.419 "w_mbytes_per_sec": 0 00:16:53.419 }, 00:16:53.419 "claimed": true, 00:16:53.419 "claim_type": "exclusive_write", 00:16:53.419 "zoned": false, 00:16:53.419 "supported_io_types": { 00:16:53.419 "read": true, 00:16:53.419 "write": true, 00:16:53.419 "unmap": true, 00:16:53.419 "flush": true, 00:16:53.419 "reset": true, 00:16:53.419 "nvme_admin": false, 00:16:53.419 "nvme_io": false, 00:16:53.419 "nvme_io_md": false, 00:16:53.419 "write_zeroes": true, 00:16:53.419 "zcopy": true, 00:16:53.419 "get_zone_info": false, 00:16:53.419 "zone_management": false, 00:16:53.419 "zone_append": false, 00:16:53.419 "compare": false, 00:16:53.419 "compare_and_write": false, 00:16:53.419 "abort": true, 00:16:53.419 "seek_hole": false, 00:16:53.419 "seek_data": false, 00:16:53.419 "copy": true, 00:16:53.419 "nvme_iov_md": false 00:16:53.419 }, 00:16:53.419 "memory_domains": [ 00:16:53.419 { 00:16:53.419 "dma_device_id": "system", 00:16:53.419 "dma_device_type": 1 00:16:53.419 }, 00:16:53.419 { 00:16:53.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.419 "dma_device_type": 2 00:16:53.419 } 00:16:53.419 ], 00:16:53.419 "driver_specific": {} 00:16:53.419 } 00:16:53.419 ] 00:16:53.419 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.419 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:53.419 10:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:53.419 10:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:53.419 10:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:53.419 10:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:16:53.419 10:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:53.419 10:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.419 10:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.419 10:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:53.419 10:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.419 10:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.419 10:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.419 10:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.419 10:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.419 10:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.419 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.420 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.420 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.420 10:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.420 "name": "Existed_Raid", 00:16:53.420 "uuid": "ac85b330-46dd-4795-8ebb-1c011311566a", 00:16:53.420 "strip_size_kb": 64, 00:16:53.420 "state": "configuring", 00:16:53.420 "raid_level": "raid5f", 00:16:53.420 "superblock": true, 00:16:53.420 "num_base_bdevs": 4, 00:16:53.420 "num_base_bdevs_discovered": 3, 
00:16:53.420 "num_base_bdevs_operational": 4, 00:16:53.420 "base_bdevs_list": [ 00:16:53.420 { 00:16:53.420 "name": "BaseBdev1", 00:16:53.420 "uuid": "a0411171-0f41-4655-97e3-8ab4d6806126", 00:16:53.420 "is_configured": true, 00:16:53.420 "data_offset": 2048, 00:16:53.420 "data_size": 63488 00:16:53.420 }, 00:16:53.420 { 00:16:53.420 "name": "BaseBdev2", 00:16:53.420 "uuid": "74fdca9e-2ec0-41b1-89c2-15ea5a0809fc", 00:16:53.420 "is_configured": true, 00:16:53.420 "data_offset": 2048, 00:16:53.420 "data_size": 63488 00:16:53.420 }, 00:16:53.420 { 00:16:53.420 "name": "BaseBdev3", 00:16:53.420 "uuid": "b88dcff4-e230-46f1-a89c-3f7cdbe6cd7e", 00:16:53.420 "is_configured": true, 00:16:53.420 "data_offset": 2048, 00:16:53.420 "data_size": 63488 00:16:53.420 }, 00:16:53.420 { 00:16:53.420 "name": "BaseBdev4", 00:16:53.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.420 "is_configured": false, 00:16:53.420 "data_offset": 0, 00:16:53.420 "data_size": 0 00:16:53.420 } 00:16:53.420 ] 00:16:53.420 }' 00:16:53.420 10:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.420 10:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.680 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:53.680 10:39:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.680 10:39:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.680 [2024-11-20 10:39:57.144912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:53.680 [2024-11-20 10:39:57.145284] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:53.680 [2024-11-20 10:39:57.145342] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:53.680 [2024-11-20 
10:39:57.145640] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:53.680 BaseBdev4 00:16:53.680 10:39:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.680 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:53.680 10:39:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:53.680 10:39:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:53.680 10:39:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:53.680 10:39:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:53.680 10:39:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:53.680 10:39:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:53.680 10:39:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.680 10:39:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.680 [2024-11-20 10:39:57.152554] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:53.680 [2024-11-20 10:39:57.152615] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:53.680 [2024-11-20 10:39:57.152846] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:53.940 10:39:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.940 10:39:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:53.940 10:39:57 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.940 10:39:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.940 [ 00:16:53.940 { 00:16:53.940 "name": "BaseBdev4", 00:16:53.940 "aliases": [ 00:16:53.940 "e88a933e-7bcb-4663-acb9-673ef6fd959a" 00:16:53.940 ], 00:16:53.940 "product_name": "Malloc disk", 00:16:53.940 "block_size": 512, 00:16:53.940 "num_blocks": 65536, 00:16:53.940 "uuid": "e88a933e-7bcb-4663-acb9-673ef6fd959a", 00:16:53.940 "assigned_rate_limits": { 00:16:53.940 "rw_ios_per_sec": 0, 00:16:53.940 "rw_mbytes_per_sec": 0, 00:16:53.940 "r_mbytes_per_sec": 0, 00:16:53.940 "w_mbytes_per_sec": 0 00:16:53.940 }, 00:16:53.940 "claimed": true, 00:16:53.940 "claim_type": "exclusive_write", 00:16:53.940 "zoned": false, 00:16:53.940 "supported_io_types": { 00:16:53.940 "read": true, 00:16:53.940 "write": true, 00:16:53.940 "unmap": true, 00:16:53.940 "flush": true, 00:16:53.940 "reset": true, 00:16:53.940 "nvme_admin": false, 00:16:53.940 "nvme_io": false, 00:16:53.940 "nvme_io_md": false, 00:16:53.940 "write_zeroes": true, 00:16:53.940 "zcopy": true, 00:16:53.940 "get_zone_info": false, 00:16:53.940 "zone_management": false, 00:16:53.940 "zone_append": false, 00:16:53.940 "compare": false, 00:16:53.940 "compare_and_write": false, 00:16:53.940 "abort": true, 00:16:53.940 "seek_hole": false, 00:16:53.940 "seek_data": false, 00:16:53.940 "copy": true, 00:16:53.940 "nvme_iov_md": false 00:16:53.940 }, 00:16:53.940 "memory_domains": [ 00:16:53.940 { 00:16:53.940 "dma_device_id": "system", 00:16:53.940 "dma_device_type": 1 00:16:53.940 }, 00:16:53.940 { 00:16:53.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.940 "dma_device_type": 2 00:16:53.940 } 00:16:53.940 ], 00:16:53.940 "driver_specific": {} 00:16:53.940 } 00:16:53.940 ] 00:16:53.940 10:39:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.940 10:39:57 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:53.940 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:53.940 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:53.940 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:53.940 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:53.940 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.940 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.940 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.940 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:53.940 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.940 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.940 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.940 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.940 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.940 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.940 10:39:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.940 10:39:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:53.941 10:39:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.941 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.941 "name": "Existed_Raid", 00:16:53.941 "uuid": "ac85b330-46dd-4795-8ebb-1c011311566a", 00:16:53.941 "strip_size_kb": 64, 00:16:53.941 "state": "online", 00:16:53.941 "raid_level": "raid5f", 00:16:53.941 "superblock": true, 00:16:53.941 "num_base_bdevs": 4, 00:16:53.941 "num_base_bdevs_discovered": 4, 00:16:53.941 "num_base_bdevs_operational": 4, 00:16:53.941 "base_bdevs_list": [ 00:16:53.941 { 00:16:53.941 "name": "BaseBdev1", 00:16:53.941 "uuid": "a0411171-0f41-4655-97e3-8ab4d6806126", 00:16:53.941 "is_configured": true, 00:16:53.941 "data_offset": 2048, 00:16:53.941 "data_size": 63488 00:16:53.941 }, 00:16:53.941 { 00:16:53.941 "name": "BaseBdev2", 00:16:53.941 "uuid": "74fdca9e-2ec0-41b1-89c2-15ea5a0809fc", 00:16:53.941 "is_configured": true, 00:16:53.941 "data_offset": 2048, 00:16:53.941 "data_size": 63488 00:16:53.941 }, 00:16:53.941 { 00:16:53.941 "name": "BaseBdev3", 00:16:53.941 "uuid": "b88dcff4-e230-46f1-a89c-3f7cdbe6cd7e", 00:16:53.941 "is_configured": true, 00:16:53.941 "data_offset": 2048, 00:16:53.941 "data_size": 63488 00:16:53.941 }, 00:16:53.941 { 00:16:53.941 "name": "BaseBdev4", 00:16:53.941 "uuid": "e88a933e-7bcb-4663-acb9-673ef6fd959a", 00:16:53.941 "is_configured": true, 00:16:53.941 "data_offset": 2048, 00:16:53.941 "data_size": 63488 00:16:53.941 } 00:16:53.941 ] 00:16:53.941 }' 00:16:53.941 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.941 10:39:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.201 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:54.201 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:16:54.201 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:54.201 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:54.201 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:54.201 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:54.201 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:54.201 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:54.201 10:39:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.201 10:39:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.201 [2024-11-20 10:39:57.632376] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:54.201 10:39:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.201 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:54.201 "name": "Existed_Raid", 00:16:54.201 "aliases": [ 00:16:54.201 "ac85b330-46dd-4795-8ebb-1c011311566a" 00:16:54.201 ], 00:16:54.201 "product_name": "Raid Volume", 00:16:54.201 "block_size": 512, 00:16:54.201 "num_blocks": 190464, 00:16:54.201 "uuid": "ac85b330-46dd-4795-8ebb-1c011311566a", 00:16:54.201 "assigned_rate_limits": { 00:16:54.201 "rw_ios_per_sec": 0, 00:16:54.201 "rw_mbytes_per_sec": 0, 00:16:54.201 "r_mbytes_per_sec": 0, 00:16:54.201 "w_mbytes_per_sec": 0 00:16:54.201 }, 00:16:54.201 "claimed": false, 00:16:54.201 "zoned": false, 00:16:54.201 "supported_io_types": { 00:16:54.201 "read": true, 00:16:54.201 "write": true, 00:16:54.201 "unmap": false, 00:16:54.201 "flush": false, 
00:16:54.201 "reset": true, 00:16:54.201 "nvme_admin": false, 00:16:54.201 "nvme_io": false, 00:16:54.201 "nvme_io_md": false, 00:16:54.201 "write_zeroes": true, 00:16:54.201 "zcopy": false, 00:16:54.201 "get_zone_info": false, 00:16:54.201 "zone_management": false, 00:16:54.201 "zone_append": false, 00:16:54.201 "compare": false, 00:16:54.201 "compare_and_write": false, 00:16:54.201 "abort": false, 00:16:54.201 "seek_hole": false, 00:16:54.201 "seek_data": false, 00:16:54.201 "copy": false, 00:16:54.201 "nvme_iov_md": false 00:16:54.201 }, 00:16:54.201 "driver_specific": { 00:16:54.201 "raid": { 00:16:54.201 "uuid": "ac85b330-46dd-4795-8ebb-1c011311566a", 00:16:54.201 "strip_size_kb": 64, 00:16:54.201 "state": "online", 00:16:54.201 "raid_level": "raid5f", 00:16:54.201 "superblock": true, 00:16:54.201 "num_base_bdevs": 4, 00:16:54.201 "num_base_bdevs_discovered": 4, 00:16:54.201 "num_base_bdevs_operational": 4, 00:16:54.201 "base_bdevs_list": [ 00:16:54.201 { 00:16:54.201 "name": "BaseBdev1", 00:16:54.201 "uuid": "a0411171-0f41-4655-97e3-8ab4d6806126", 00:16:54.201 "is_configured": true, 00:16:54.201 "data_offset": 2048, 00:16:54.201 "data_size": 63488 00:16:54.201 }, 00:16:54.201 { 00:16:54.201 "name": "BaseBdev2", 00:16:54.201 "uuid": "74fdca9e-2ec0-41b1-89c2-15ea5a0809fc", 00:16:54.201 "is_configured": true, 00:16:54.201 "data_offset": 2048, 00:16:54.201 "data_size": 63488 00:16:54.201 }, 00:16:54.201 { 00:16:54.201 "name": "BaseBdev3", 00:16:54.201 "uuid": "b88dcff4-e230-46f1-a89c-3f7cdbe6cd7e", 00:16:54.201 "is_configured": true, 00:16:54.201 "data_offset": 2048, 00:16:54.201 "data_size": 63488 00:16:54.201 }, 00:16:54.201 { 00:16:54.201 "name": "BaseBdev4", 00:16:54.201 "uuid": "e88a933e-7bcb-4663-acb9-673ef6fd959a", 00:16:54.201 "is_configured": true, 00:16:54.201 "data_offset": 2048, 00:16:54.201 "data_size": 63488 00:16:54.201 } 00:16:54.201 ] 00:16:54.201 } 00:16:54.201 } 00:16:54.201 }' 00:16:54.201 10:39:57 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:54.461 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:54.461 BaseBdev2 00:16:54.461 BaseBdev3 00:16:54.461 BaseBdev4' 00:16:54.461 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.461 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:54.461 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:54.461 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:54.461 10:39:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.461 10:39:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.461 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.461 10:39:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.461 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:54.461 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:54.461 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:54.461 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:54.461 10:39:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.461 10:39:57 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.461 10:39:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.461 10:39:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.461 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:54.461 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:54.461 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:54.461 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.461 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:54.461 10:39:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.461 10:39:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.461 10:39:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.461 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:54.462 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:54.462 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:54.462 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:54.462 10:39:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.462 10:39:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:16:54.462 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.462 10:39:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.722 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:54.722 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:54.722 10:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:54.722 10:39:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.722 10:39:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.722 [2024-11-20 10:39:57.955616] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:54.722 10:39:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.722 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:54.722 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:54.722 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:54.722 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:54.722 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:54.722 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:54.722 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.722 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:16:54.722 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:54.722 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.722 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:54.722 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.722 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.722 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.722 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.722 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.722 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.722 10:39:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.722 10:39:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.722 10:39:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.722 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.722 "name": "Existed_Raid", 00:16:54.722 "uuid": "ac85b330-46dd-4795-8ebb-1c011311566a", 00:16:54.722 "strip_size_kb": 64, 00:16:54.722 "state": "online", 00:16:54.722 "raid_level": "raid5f", 00:16:54.722 "superblock": true, 00:16:54.722 "num_base_bdevs": 4, 00:16:54.722 "num_base_bdevs_discovered": 3, 00:16:54.722 "num_base_bdevs_operational": 3, 00:16:54.722 "base_bdevs_list": [ 00:16:54.722 { 00:16:54.722 "name": null, 00:16:54.722 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:54.722 "is_configured": false, 00:16:54.722 "data_offset": 0, 00:16:54.722 "data_size": 63488 00:16:54.722 }, 00:16:54.722 { 00:16:54.722 "name": "BaseBdev2", 00:16:54.722 "uuid": "74fdca9e-2ec0-41b1-89c2-15ea5a0809fc", 00:16:54.722 "is_configured": true, 00:16:54.722 "data_offset": 2048, 00:16:54.722 "data_size": 63488 00:16:54.722 }, 00:16:54.722 { 00:16:54.722 "name": "BaseBdev3", 00:16:54.722 "uuid": "b88dcff4-e230-46f1-a89c-3f7cdbe6cd7e", 00:16:54.722 "is_configured": true, 00:16:54.722 "data_offset": 2048, 00:16:54.722 "data_size": 63488 00:16:54.722 }, 00:16:54.722 { 00:16:54.722 "name": "BaseBdev4", 00:16:54.722 "uuid": "e88a933e-7bcb-4663-acb9-673ef6fd959a", 00:16:54.722 "is_configured": true, 00:16:54.722 "data_offset": 2048, 00:16:54.722 "data_size": 63488 00:16:54.722 } 00:16:54.722 ] 00:16:54.722 }' 00:16:54.722 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.722 10:39:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.291 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:55.291 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:55.291 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.291 10:39:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.291 10:39:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.291 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:55.291 10:39:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.291 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:16:55.291 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:55.291 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:55.291 10:39:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.291 10:39:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.291 [2024-11-20 10:39:58.572744] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:55.291 [2024-11-20 10:39:58.572972] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:55.291 [2024-11-20 10:39:58.668565] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:55.291 10:39:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.291 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:55.291 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:55.291 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.291 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:55.291 10:39:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.291 10:39:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.291 10:39:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.291 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:55.291 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:55.291 
10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:55.291 10:39:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.291 10:39:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.291 [2024-11-20 10:39:58.724548] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:55.551 10:39:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.551 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:55.551 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:55.551 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:55.551 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.551 10:39:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.551 10:39:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.551 10:39:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.551 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:55.551 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:55.551 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:55.551 10:39:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.551 10:39:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.551 [2024-11-20 10:39:58.864985] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:55.551 [2024-11-20 10:39:58.865081] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:55.551 10:39:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.551 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:55.551 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:55.551 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:55.551 10:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.551 10:39:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.551 10:39:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.551 10:39:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.551 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:55.551 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:55.551 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:55.551 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:55.551 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:55.551 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:55.551 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.551 10:39:59 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:55.812 BaseBdev2 00:16:55.812 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.812 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:55.812 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:55.812 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:55.812 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:55.812 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:55.812 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:55.812 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:55.812 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.812 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.812 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.812 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:55.812 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.812 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.812 [ 00:16:55.812 { 00:16:55.812 "name": "BaseBdev2", 00:16:55.812 "aliases": [ 00:16:55.812 "88d3d76d-8004-40c7-9dac-ce374b3adaa1" 00:16:55.812 ], 00:16:55.812 "product_name": "Malloc disk", 00:16:55.812 "block_size": 512, 00:16:55.812 "num_blocks": 65536, 00:16:55.812 "uuid": 
"88d3d76d-8004-40c7-9dac-ce374b3adaa1", 00:16:55.812 "assigned_rate_limits": { 00:16:55.812 "rw_ios_per_sec": 0, 00:16:55.812 "rw_mbytes_per_sec": 0, 00:16:55.812 "r_mbytes_per_sec": 0, 00:16:55.812 "w_mbytes_per_sec": 0 00:16:55.812 }, 00:16:55.812 "claimed": false, 00:16:55.812 "zoned": false, 00:16:55.812 "supported_io_types": { 00:16:55.812 "read": true, 00:16:55.812 "write": true, 00:16:55.812 "unmap": true, 00:16:55.812 "flush": true, 00:16:55.812 "reset": true, 00:16:55.813 "nvme_admin": false, 00:16:55.813 "nvme_io": false, 00:16:55.813 "nvme_io_md": false, 00:16:55.813 "write_zeroes": true, 00:16:55.813 "zcopy": true, 00:16:55.813 "get_zone_info": false, 00:16:55.813 "zone_management": false, 00:16:55.813 "zone_append": false, 00:16:55.813 "compare": false, 00:16:55.813 "compare_and_write": false, 00:16:55.813 "abort": true, 00:16:55.813 "seek_hole": false, 00:16:55.813 "seek_data": false, 00:16:55.813 "copy": true, 00:16:55.813 "nvme_iov_md": false 00:16:55.813 }, 00:16:55.813 "memory_domains": [ 00:16:55.813 { 00:16:55.813 "dma_device_id": "system", 00:16:55.813 "dma_device_type": 1 00:16:55.813 }, 00:16:55.813 { 00:16:55.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.813 "dma_device_type": 2 00:16:55.813 } 00:16:55.813 ], 00:16:55.813 "driver_specific": {} 00:16:55.813 } 00:16:55.813 ] 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.813 BaseBdev3 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.813 [ 00:16:55.813 { 00:16:55.813 "name": "BaseBdev3", 00:16:55.813 "aliases": [ 00:16:55.813 "4b937429-5efb-4ecd-a207-04392711d89a" 00:16:55.813 ], 00:16:55.813 
"product_name": "Malloc disk", 00:16:55.813 "block_size": 512, 00:16:55.813 "num_blocks": 65536, 00:16:55.813 "uuid": "4b937429-5efb-4ecd-a207-04392711d89a", 00:16:55.813 "assigned_rate_limits": { 00:16:55.813 "rw_ios_per_sec": 0, 00:16:55.813 "rw_mbytes_per_sec": 0, 00:16:55.813 "r_mbytes_per_sec": 0, 00:16:55.813 "w_mbytes_per_sec": 0 00:16:55.813 }, 00:16:55.813 "claimed": false, 00:16:55.813 "zoned": false, 00:16:55.813 "supported_io_types": { 00:16:55.813 "read": true, 00:16:55.813 "write": true, 00:16:55.813 "unmap": true, 00:16:55.813 "flush": true, 00:16:55.813 "reset": true, 00:16:55.813 "nvme_admin": false, 00:16:55.813 "nvme_io": false, 00:16:55.813 "nvme_io_md": false, 00:16:55.813 "write_zeroes": true, 00:16:55.813 "zcopy": true, 00:16:55.813 "get_zone_info": false, 00:16:55.813 "zone_management": false, 00:16:55.813 "zone_append": false, 00:16:55.813 "compare": false, 00:16:55.813 "compare_and_write": false, 00:16:55.813 "abort": true, 00:16:55.813 "seek_hole": false, 00:16:55.813 "seek_data": false, 00:16:55.813 "copy": true, 00:16:55.813 "nvme_iov_md": false 00:16:55.813 }, 00:16:55.813 "memory_domains": [ 00:16:55.813 { 00:16:55.813 "dma_device_id": "system", 00:16:55.813 "dma_device_type": 1 00:16:55.813 }, 00:16:55.813 { 00:16:55.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.813 "dma_device_type": 2 00:16:55.813 } 00:16:55.813 ], 00:16:55.813 "driver_specific": {} 00:16:55.813 } 00:16:55.813 ] 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.813 BaseBdev4 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.813 [ 00:16:55.813 { 00:16:55.813 "name": "BaseBdev4", 00:16:55.813 
"aliases": [ 00:16:55.813 "68e2d63a-8f2d-4928-b3dc-4868602f5419" 00:16:55.813 ], 00:16:55.813 "product_name": "Malloc disk", 00:16:55.813 "block_size": 512, 00:16:55.813 "num_blocks": 65536, 00:16:55.813 "uuid": "68e2d63a-8f2d-4928-b3dc-4868602f5419", 00:16:55.813 "assigned_rate_limits": { 00:16:55.813 "rw_ios_per_sec": 0, 00:16:55.813 "rw_mbytes_per_sec": 0, 00:16:55.813 "r_mbytes_per_sec": 0, 00:16:55.813 "w_mbytes_per_sec": 0 00:16:55.813 }, 00:16:55.813 "claimed": false, 00:16:55.813 "zoned": false, 00:16:55.813 "supported_io_types": { 00:16:55.813 "read": true, 00:16:55.813 "write": true, 00:16:55.813 "unmap": true, 00:16:55.813 "flush": true, 00:16:55.813 "reset": true, 00:16:55.813 "nvme_admin": false, 00:16:55.813 "nvme_io": false, 00:16:55.813 "nvme_io_md": false, 00:16:55.813 "write_zeroes": true, 00:16:55.813 "zcopy": true, 00:16:55.813 "get_zone_info": false, 00:16:55.813 "zone_management": false, 00:16:55.813 "zone_append": false, 00:16:55.813 "compare": false, 00:16:55.813 "compare_and_write": false, 00:16:55.813 "abort": true, 00:16:55.813 "seek_hole": false, 00:16:55.813 "seek_data": false, 00:16:55.813 "copy": true, 00:16:55.813 "nvme_iov_md": false 00:16:55.813 }, 00:16:55.813 "memory_domains": [ 00:16:55.813 { 00:16:55.813 "dma_device_id": "system", 00:16:55.813 "dma_device_type": 1 00:16:55.813 }, 00:16:55.813 { 00:16:55.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.813 "dma_device_type": 2 00:16:55.813 } 00:16:55.813 ], 00:16:55.813 "driver_specific": {} 00:16:55.813 } 00:16:55.813 ] 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:55.813 
10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.813 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.813 [2024-11-20 10:39:59.261016] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:55.813 [2024-11-20 10:39:59.261103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:55.813 [2024-11-20 10:39:59.261145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:55.813 [2024-11-20 10:39:59.262972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:55.813 [2024-11-20 10:39:59.263085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:55.814 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.814 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:55.814 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:55.814 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:55.814 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:55.814 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.814 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:55.814 10:39:59 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.814 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.814 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.814 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.814 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.814 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.814 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.814 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.073 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.073 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.073 "name": "Existed_Raid", 00:16:56.073 "uuid": "f0d4f361-e0b9-4ac5-b586-2dd30ba86f2b", 00:16:56.073 "strip_size_kb": 64, 00:16:56.073 "state": "configuring", 00:16:56.073 "raid_level": "raid5f", 00:16:56.073 "superblock": true, 00:16:56.073 "num_base_bdevs": 4, 00:16:56.073 "num_base_bdevs_discovered": 3, 00:16:56.073 "num_base_bdevs_operational": 4, 00:16:56.073 "base_bdevs_list": [ 00:16:56.073 { 00:16:56.073 "name": "BaseBdev1", 00:16:56.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.073 "is_configured": false, 00:16:56.073 "data_offset": 0, 00:16:56.073 "data_size": 0 00:16:56.073 }, 00:16:56.073 { 00:16:56.073 "name": "BaseBdev2", 00:16:56.073 "uuid": "88d3d76d-8004-40c7-9dac-ce374b3adaa1", 00:16:56.073 "is_configured": true, 00:16:56.073 "data_offset": 2048, 00:16:56.073 "data_size": 63488 00:16:56.073 }, 00:16:56.073 { 00:16:56.073 "name": "BaseBdev3", 
00:16:56.073 "uuid": "4b937429-5efb-4ecd-a207-04392711d89a", 00:16:56.073 "is_configured": true, 00:16:56.073 "data_offset": 2048, 00:16:56.073 "data_size": 63488 00:16:56.073 }, 00:16:56.073 { 00:16:56.073 "name": "BaseBdev4", 00:16:56.073 "uuid": "68e2d63a-8f2d-4928-b3dc-4868602f5419", 00:16:56.073 "is_configured": true, 00:16:56.073 "data_offset": 2048, 00:16:56.073 "data_size": 63488 00:16:56.073 } 00:16:56.073 ] 00:16:56.073 }' 00:16:56.073 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.073 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.333 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:56.333 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.333 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.333 [2024-11-20 10:39:59.716251] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:56.333 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.333 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:56.333 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:56.333 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:56.333 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:56.333 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:56.333 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:56.333 
10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.333 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.333 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.333 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.333 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.333 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.333 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.333 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.333 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.333 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.333 "name": "Existed_Raid", 00:16:56.333 "uuid": "f0d4f361-e0b9-4ac5-b586-2dd30ba86f2b", 00:16:56.333 "strip_size_kb": 64, 00:16:56.333 "state": "configuring", 00:16:56.333 "raid_level": "raid5f", 00:16:56.333 "superblock": true, 00:16:56.333 "num_base_bdevs": 4, 00:16:56.333 "num_base_bdevs_discovered": 2, 00:16:56.333 "num_base_bdevs_operational": 4, 00:16:56.333 "base_bdevs_list": [ 00:16:56.333 { 00:16:56.333 "name": "BaseBdev1", 00:16:56.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.333 "is_configured": false, 00:16:56.333 "data_offset": 0, 00:16:56.333 "data_size": 0 00:16:56.333 }, 00:16:56.333 { 00:16:56.333 "name": null, 00:16:56.333 "uuid": "88d3d76d-8004-40c7-9dac-ce374b3adaa1", 00:16:56.333 "is_configured": false, 00:16:56.333 "data_offset": 0, 00:16:56.333 "data_size": 63488 00:16:56.333 }, 00:16:56.333 { 
00:16:56.333 "name": "BaseBdev3", 00:16:56.333 "uuid": "4b937429-5efb-4ecd-a207-04392711d89a", 00:16:56.333 "is_configured": true, 00:16:56.333 "data_offset": 2048, 00:16:56.333 "data_size": 63488 00:16:56.333 }, 00:16:56.333 { 00:16:56.333 "name": "BaseBdev4", 00:16:56.333 "uuid": "68e2d63a-8f2d-4928-b3dc-4868602f5419", 00:16:56.333 "is_configured": true, 00:16:56.333 "data_offset": 2048, 00:16:56.333 "data_size": 63488 00:16:56.333 } 00:16:56.333 ] 00:16:56.333 }' 00:16:56.333 10:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.333 10:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.902 [2024-11-20 10:40:00.266719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:56.902 BaseBdev1 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.902 [ 00:16:56.902 { 00:16:56.902 "name": "BaseBdev1", 00:16:56.902 "aliases": [ 00:16:56.902 "f52276dc-b02c-4d7f-9054-7ab8f272df7a" 00:16:56.902 ], 00:16:56.902 "product_name": "Malloc disk", 00:16:56.902 "block_size": 512, 00:16:56.902 "num_blocks": 65536, 00:16:56.902 "uuid": "f52276dc-b02c-4d7f-9054-7ab8f272df7a", 00:16:56.902 "assigned_rate_limits": { 00:16:56.902 "rw_ios_per_sec": 0, 00:16:56.902 "rw_mbytes_per_sec": 0, 00:16:56.902 
"r_mbytes_per_sec": 0, 00:16:56.902 "w_mbytes_per_sec": 0 00:16:56.902 }, 00:16:56.902 "claimed": true, 00:16:56.902 "claim_type": "exclusive_write", 00:16:56.902 "zoned": false, 00:16:56.902 "supported_io_types": { 00:16:56.902 "read": true, 00:16:56.902 "write": true, 00:16:56.902 "unmap": true, 00:16:56.902 "flush": true, 00:16:56.902 "reset": true, 00:16:56.902 "nvme_admin": false, 00:16:56.902 "nvme_io": false, 00:16:56.902 "nvme_io_md": false, 00:16:56.902 "write_zeroes": true, 00:16:56.902 "zcopy": true, 00:16:56.902 "get_zone_info": false, 00:16:56.902 "zone_management": false, 00:16:56.902 "zone_append": false, 00:16:56.902 "compare": false, 00:16:56.902 "compare_and_write": false, 00:16:56.902 "abort": true, 00:16:56.902 "seek_hole": false, 00:16:56.902 "seek_data": false, 00:16:56.902 "copy": true, 00:16:56.902 "nvme_iov_md": false 00:16:56.902 }, 00:16:56.902 "memory_domains": [ 00:16:56.902 { 00:16:56.902 "dma_device_id": "system", 00:16:56.902 "dma_device_type": 1 00:16:56.902 }, 00:16:56.902 { 00:16:56.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.902 "dma_device_type": 2 00:16:56.902 } 00:16:56.902 ], 00:16:56.902 "driver_specific": {} 00:16:56.902 } 00:16:56.902 ] 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:56.902 10:40:00 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.902 10:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.902 "name": "Existed_Raid", 00:16:56.902 "uuid": "f0d4f361-e0b9-4ac5-b586-2dd30ba86f2b", 00:16:56.902 "strip_size_kb": 64, 00:16:56.902 "state": "configuring", 00:16:56.902 "raid_level": "raid5f", 00:16:56.902 "superblock": true, 00:16:56.902 "num_base_bdevs": 4, 00:16:56.902 "num_base_bdevs_discovered": 3, 00:16:56.902 "num_base_bdevs_operational": 4, 00:16:56.902 "base_bdevs_list": [ 00:16:56.902 { 00:16:56.902 "name": "BaseBdev1", 00:16:56.902 "uuid": "f52276dc-b02c-4d7f-9054-7ab8f272df7a", 00:16:56.902 "is_configured": true, 00:16:56.902 "data_offset": 2048, 00:16:56.902 "data_size": 63488 00:16:56.902 
}, 00:16:56.902 { 00:16:56.902 "name": null, 00:16:56.902 "uuid": "88d3d76d-8004-40c7-9dac-ce374b3adaa1", 00:16:56.902 "is_configured": false, 00:16:56.902 "data_offset": 0, 00:16:56.902 "data_size": 63488 00:16:56.902 }, 00:16:56.902 { 00:16:56.902 "name": "BaseBdev3", 00:16:56.902 "uuid": "4b937429-5efb-4ecd-a207-04392711d89a", 00:16:56.903 "is_configured": true, 00:16:56.903 "data_offset": 2048, 00:16:56.903 "data_size": 63488 00:16:56.903 }, 00:16:56.903 { 00:16:56.903 "name": "BaseBdev4", 00:16:56.903 "uuid": "68e2d63a-8f2d-4928-b3dc-4868602f5419", 00:16:56.903 "is_configured": true, 00:16:56.903 "data_offset": 2048, 00:16:56.903 "data_size": 63488 00:16:56.903 } 00:16:56.903 ] 00:16:56.903 }' 00:16:56.903 10:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.903 10:40:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.472 10:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.472 10:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:57.472 10:40:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.472 10:40:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.472 10:40:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.472 10:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:57.472 10:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:57.472 10:40:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.472 10:40:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.472 
[2024-11-20 10:40:00.769946] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:57.472 10:40:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.472 10:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:57.472 10:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:57.472 10:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:57.472 10:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:57.472 10:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:57.472 10:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:57.472 10:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.472 10:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.472 10:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.472 10:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.472 10:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.472 10:40:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.472 10:40:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.472 10:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.472 10:40:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:16:57.472 10:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.472 "name": "Existed_Raid", 00:16:57.472 "uuid": "f0d4f361-e0b9-4ac5-b586-2dd30ba86f2b", 00:16:57.472 "strip_size_kb": 64, 00:16:57.472 "state": "configuring", 00:16:57.472 "raid_level": "raid5f", 00:16:57.472 "superblock": true, 00:16:57.472 "num_base_bdevs": 4, 00:16:57.472 "num_base_bdevs_discovered": 2, 00:16:57.472 "num_base_bdevs_operational": 4, 00:16:57.472 "base_bdevs_list": [ 00:16:57.472 { 00:16:57.472 "name": "BaseBdev1", 00:16:57.472 "uuid": "f52276dc-b02c-4d7f-9054-7ab8f272df7a", 00:16:57.472 "is_configured": true, 00:16:57.472 "data_offset": 2048, 00:16:57.472 "data_size": 63488 00:16:57.472 }, 00:16:57.472 { 00:16:57.472 "name": null, 00:16:57.472 "uuid": "88d3d76d-8004-40c7-9dac-ce374b3adaa1", 00:16:57.472 "is_configured": false, 00:16:57.472 "data_offset": 0, 00:16:57.472 "data_size": 63488 00:16:57.472 }, 00:16:57.472 { 00:16:57.472 "name": null, 00:16:57.472 "uuid": "4b937429-5efb-4ecd-a207-04392711d89a", 00:16:57.472 "is_configured": false, 00:16:57.472 "data_offset": 0, 00:16:57.472 "data_size": 63488 00:16:57.472 }, 00:16:57.472 { 00:16:57.472 "name": "BaseBdev4", 00:16:57.472 "uuid": "68e2d63a-8f2d-4928-b3dc-4868602f5419", 00:16:57.472 "is_configured": true, 00:16:57.472 "data_offset": 2048, 00:16:57.472 "data_size": 63488 00:16:57.472 } 00:16:57.472 ] 00:16:57.472 }' 00:16:57.472 10:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.472 10:40:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.040 10:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.040 10:40:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.040 10:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:16:58.040 10:40:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.040 10:40:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.040 10:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:58.040 10:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:58.040 10:40:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.040 10:40:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.040 [2024-11-20 10:40:01.273103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:58.040 10:40:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.040 10:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:58.040 10:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:58.040 10:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:58.040 10:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:58.040 10:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.041 10:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:58.041 10:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.041 10:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.041 10:40:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.041 10:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.041 10:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.041 10:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.041 10:40:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.041 10:40:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.041 10:40:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.041 10:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.041 "name": "Existed_Raid", 00:16:58.041 "uuid": "f0d4f361-e0b9-4ac5-b586-2dd30ba86f2b", 00:16:58.041 "strip_size_kb": 64, 00:16:58.041 "state": "configuring", 00:16:58.041 "raid_level": "raid5f", 00:16:58.041 "superblock": true, 00:16:58.041 "num_base_bdevs": 4, 00:16:58.041 "num_base_bdevs_discovered": 3, 00:16:58.041 "num_base_bdevs_operational": 4, 00:16:58.041 "base_bdevs_list": [ 00:16:58.041 { 00:16:58.041 "name": "BaseBdev1", 00:16:58.041 "uuid": "f52276dc-b02c-4d7f-9054-7ab8f272df7a", 00:16:58.041 "is_configured": true, 00:16:58.041 "data_offset": 2048, 00:16:58.041 "data_size": 63488 00:16:58.041 }, 00:16:58.041 { 00:16:58.041 "name": null, 00:16:58.041 "uuid": "88d3d76d-8004-40c7-9dac-ce374b3adaa1", 00:16:58.041 "is_configured": false, 00:16:58.041 "data_offset": 0, 00:16:58.041 "data_size": 63488 00:16:58.041 }, 00:16:58.041 { 00:16:58.041 "name": "BaseBdev3", 00:16:58.041 "uuid": "4b937429-5efb-4ecd-a207-04392711d89a", 00:16:58.041 "is_configured": true, 00:16:58.041 "data_offset": 2048, 00:16:58.041 "data_size": 63488 00:16:58.041 }, 00:16:58.041 { 
00:16:58.041 "name": "BaseBdev4", 00:16:58.041 "uuid": "68e2d63a-8f2d-4928-b3dc-4868602f5419", 00:16:58.041 "is_configured": true, 00:16:58.041 "data_offset": 2048, 00:16:58.041 "data_size": 63488 00:16:58.041 } 00:16:58.041 ] 00:16:58.041 }' 00:16:58.041 10:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.041 10:40:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.300 10:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:58.300 10:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.300 10:40:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.300 10:40:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.300 10:40:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.300 10:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:58.300 10:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:58.300 10:40:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.300 10:40:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.300 [2024-11-20 10:40:01.768286] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:58.567 10:40:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.567 10:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:58.567 10:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:16:58.567 10:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:58.567 10:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:58.567 10:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.567 10:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:58.567 10:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.567 10:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.567 10:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.567 10:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.567 10:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.567 10:40:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.567 10:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.567 10:40:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.567 10:40:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.568 10:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.568 "name": "Existed_Raid", 00:16:58.568 "uuid": "f0d4f361-e0b9-4ac5-b586-2dd30ba86f2b", 00:16:58.568 "strip_size_kb": 64, 00:16:58.568 "state": "configuring", 00:16:58.568 "raid_level": "raid5f", 00:16:58.568 "superblock": true, 00:16:58.568 "num_base_bdevs": 4, 00:16:58.568 "num_base_bdevs_discovered": 2, 00:16:58.568 
"num_base_bdevs_operational": 4, 00:16:58.568 "base_bdevs_list": [ 00:16:58.568 { 00:16:58.568 "name": null, 00:16:58.568 "uuid": "f52276dc-b02c-4d7f-9054-7ab8f272df7a", 00:16:58.568 "is_configured": false, 00:16:58.568 "data_offset": 0, 00:16:58.568 "data_size": 63488 00:16:58.568 }, 00:16:58.568 { 00:16:58.568 "name": null, 00:16:58.568 "uuid": "88d3d76d-8004-40c7-9dac-ce374b3adaa1", 00:16:58.568 "is_configured": false, 00:16:58.568 "data_offset": 0, 00:16:58.568 "data_size": 63488 00:16:58.568 }, 00:16:58.568 { 00:16:58.568 "name": "BaseBdev3", 00:16:58.568 "uuid": "4b937429-5efb-4ecd-a207-04392711d89a", 00:16:58.568 "is_configured": true, 00:16:58.568 "data_offset": 2048, 00:16:58.568 "data_size": 63488 00:16:58.568 }, 00:16:58.568 { 00:16:58.568 "name": "BaseBdev4", 00:16:58.568 "uuid": "68e2d63a-8f2d-4928-b3dc-4868602f5419", 00:16:58.568 "is_configured": true, 00:16:58.568 "data_offset": 2048, 00:16:58.568 "data_size": 63488 00:16:58.568 } 00:16:58.568 ] 00:16:58.568 }' 00:16:58.568 10:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.568 10:40:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.847 10:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.847 10:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.847 10:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:58.847 10:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.847 10:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.106 10:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:59.106 10:40:02 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:59.106 10:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.106 10:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.106 [2024-11-20 10:40:02.334323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:59.106 10:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.106 10:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:59.106 10:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:59.106 10:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:59.106 10:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:59.106 10:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:59.106 10:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:59.106 10:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.106 10:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.106 10:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.106 10:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.106 10:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.107 10:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:59.107 10:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.107 10:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:59.107 10:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.107 10:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.107 "name": "Existed_Raid", 00:16:59.107 "uuid": "f0d4f361-e0b9-4ac5-b586-2dd30ba86f2b", 00:16:59.107 "strip_size_kb": 64, 00:16:59.107 "state": "configuring", 00:16:59.107 "raid_level": "raid5f", 00:16:59.107 "superblock": true, 00:16:59.107 "num_base_bdevs": 4, 00:16:59.107 "num_base_bdevs_discovered": 3, 00:16:59.107 "num_base_bdevs_operational": 4, 00:16:59.107 "base_bdevs_list": [ 00:16:59.107 { 00:16:59.107 "name": null, 00:16:59.107 "uuid": "f52276dc-b02c-4d7f-9054-7ab8f272df7a", 00:16:59.107 "is_configured": false, 00:16:59.107 "data_offset": 0, 00:16:59.107 "data_size": 63488 00:16:59.107 }, 00:16:59.107 { 00:16:59.107 "name": "BaseBdev2", 00:16:59.107 "uuid": "88d3d76d-8004-40c7-9dac-ce374b3adaa1", 00:16:59.107 "is_configured": true, 00:16:59.107 "data_offset": 2048, 00:16:59.107 "data_size": 63488 00:16:59.107 }, 00:16:59.107 { 00:16:59.107 "name": "BaseBdev3", 00:16:59.107 "uuid": "4b937429-5efb-4ecd-a207-04392711d89a", 00:16:59.107 "is_configured": true, 00:16:59.107 "data_offset": 2048, 00:16:59.107 "data_size": 63488 00:16:59.107 }, 00:16:59.107 { 00:16:59.107 "name": "BaseBdev4", 00:16:59.107 "uuid": "68e2d63a-8f2d-4928-b3dc-4868602f5419", 00:16:59.107 "is_configured": true, 00:16:59.107 "data_offset": 2048, 00:16:59.107 "data_size": 63488 00:16:59.107 } 00:16:59.107 ] 00:16:59.107 }' 00:16:59.107 10:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.107 10:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:16:59.368 10:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:59.368 10:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.368 10:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.368 10:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.368 10:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.629 10:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:59.629 10:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.629 10:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.629 10:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.629 10:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:59.629 10:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.629 10:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f52276dc-b02c-4d7f-9054-7ab8f272df7a 00:16:59.629 10:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.629 10:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.629 [2024-11-20 10:40:02.936029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:59.629 [2024-11-20 10:40:02.936421] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:59.629 [2024-11-20 
10:40:02.936485] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:59.629 [2024-11-20 10:40:02.936791] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:59.629 NewBaseBdev 00:16:59.629 10:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.629 10:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:59.629 10:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:59.629 10:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:59.629 10:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:59.629 10:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:59.629 10:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:59.629 10:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:59.629 10:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.629 10:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.629 [2024-11-20 10:40:02.944398] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:59.629 [2024-11-20 10:40:02.944466] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:59.629 [2024-11-20 10:40:02.944815] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.629 10:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.629 10:40:02 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:59.629 10:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.629 10:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.629 [ 00:16:59.629 { 00:16:59.629 "name": "NewBaseBdev", 00:16:59.629 "aliases": [ 00:16:59.629 "f52276dc-b02c-4d7f-9054-7ab8f272df7a" 00:16:59.629 ], 00:16:59.629 "product_name": "Malloc disk", 00:16:59.629 "block_size": 512, 00:16:59.629 "num_blocks": 65536, 00:16:59.629 "uuid": "f52276dc-b02c-4d7f-9054-7ab8f272df7a", 00:16:59.629 "assigned_rate_limits": { 00:16:59.629 "rw_ios_per_sec": 0, 00:16:59.629 "rw_mbytes_per_sec": 0, 00:16:59.629 "r_mbytes_per_sec": 0, 00:16:59.629 "w_mbytes_per_sec": 0 00:16:59.629 }, 00:16:59.629 "claimed": true, 00:16:59.629 "claim_type": "exclusive_write", 00:16:59.629 "zoned": false, 00:16:59.629 "supported_io_types": { 00:16:59.629 "read": true, 00:16:59.629 "write": true, 00:16:59.629 "unmap": true, 00:16:59.629 "flush": true, 00:16:59.629 "reset": true, 00:16:59.629 "nvme_admin": false, 00:16:59.629 "nvme_io": false, 00:16:59.629 "nvme_io_md": false, 00:16:59.629 "write_zeroes": true, 00:16:59.629 "zcopy": true, 00:16:59.629 "get_zone_info": false, 00:16:59.629 "zone_management": false, 00:16:59.629 "zone_append": false, 00:16:59.629 "compare": false, 00:16:59.629 "compare_and_write": false, 00:16:59.629 "abort": true, 00:16:59.629 "seek_hole": false, 00:16:59.629 "seek_data": false, 00:16:59.629 "copy": true, 00:16:59.629 "nvme_iov_md": false 00:16:59.629 }, 00:16:59.629 "memory_domains": [ 00:16:59.629 { 00:16:59.629 "dma_device_id": "system", 00:16:59.629 "dma_device_type": 1 00:16:59.629 }, 00:16:59.629 { 00:16:59.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.629 "dma_device_type": 2 00:16:59.629 } 00:16:59.629 ], 00:16:59.629 "driver_specific": {} 00:16:59.629 } 00:16:59.629 ] 00:16:59.629 10:40:02 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.629 10:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:59.629 10:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:59.629 10:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:59.629 10:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.629 10:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:59.629 10:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:59.629 10:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:59.629 10:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.629 10:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.629 10:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.629 10:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.629 10:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.630 10:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:59.630 10:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.630 10:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.630 10:40:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:59.630 10:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.630 "name": "Existed_Raid", 00:16:59.630 "uuid": "f0d4f361-e0b9-4ac5-b586-2dd30ba86f2b", 00:16:59.630 "strip_size_kb": 64, 00:16:59.630 "state": "online", 00:16:59.630 "raid_level": "raid5f", 00:16:59.630 "superblock": true, 00:16:59.630 "num_base_bdevs": 4, 00:16:59.630 "num_base_bdevs_discovered": 4, 00:16:59.630 "num_base_bdevs_operational": 4, 00:16:59.630 "base_bdevs_list": [ 00:16:59.630 { 00:16:59.630 "name": "NewBaseBdev", 00:16:59.630 "uuid": "f52276dc-b02c-4d7f-9054-7ab8f272df7a", 00:16:59.630 "is_configured": true, 00:16:59.630 "data_offset": 2048, 00:16:59.630 "data_size": 63488 00:16:59.630 }, 00:16:59.630 { 00:16:59.630 "name": "BaseBdev2", 00:16:59.630 "uuid": "88d3d76d-8004-40c7-9dac-ce374b3adaa1", 00:16:59.630 "is_configured": true, 00:16:59.630 "data_offset": 2048, 00:16:59.630 "data_size": 63488 00:16:59.630 }, 00:16:59.630 { 00:16:59.630 "name": "BaseBdev3", 00:16:59.630 "uuid": "4b937429-5efb-4ecd-a207-04392711d89a", 00:16:59.630 "is_configured": true, 00:16:59.630 "data_offset": 2048, 00:16:59.630 "data_size": 63488 00:16:59.630 }, 00:16:59.630 { 00:16:59.630 "name": "BaseBdev4", 00:16:59.630 "uuid": "68e2d63a-8f2d-4928-b3dc-4868602f5419", 00:16:59.630 "is_configured": true, 00:16:59.630 "data_offset": 2048, 00:16:59.630 "data_size": 63488 00:16:59.630 } 00:16:59.630 ] 00:16:59.630 }' 00:16:59.630 10:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.630 10:40:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.200 10:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:00.200 10:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:00.200 10:40:03 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:00.200 10:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:00.200 10:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:00.200 10:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:00.200 10:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:00.200 10:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:00.200 10:40:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.200 10:40:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.200 [2024-11-20 10:40:03.476870] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:00.200 10:40:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.200 10:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:00.200 "name": "Existed_Raid", 00:17:00.200 "aliases": [ 00:17:00.200 "f0d4f361-e0b9-4ac5-b586-2dd30ba86f2b" 00:17:00.200 ], 00:17:00.200 "product_name": "Raid Volume", 00:17:00.200 "block_size": 512, 00:17:00.200 "num_blocks": 190464, 00:17:00.200 "uuid": "f0d4f361-e0b9-4ac5-b586-2dd30ba86f2b", 00:17:00.200 "assigned_rate_limits": { 00:17:00.200 "rw_ios_per_sec": 0, 00:17:00.200 "rw_mbytes_per_sec": 0, 00:17:00.200 "r_mbytes_per_sec": 0, 00:17:00.200 "w_mbytes_per_sec": 0 00:17:00.200 }, 00:17:00.200 "claimed": false, 00:17:00.200 "zoned": false, 00:17:00.200 "supported_io_types": { 00:17:00.200 "read": true, 00:17:00.200 "write": true, 00:17:00.200 "unmap": false, 00:17:00.200 "flush": false, 00:17:00.200 "reset": true, 00:17:00.200 "nvme_admin": false, 00:17:00.200 "nvme_io": false, 
00:17:00.200 "nvme_io_md": false, 00:17:00.200 "write_zeroes": true, 00:17:00.200 "zcopy": false, 00:17:00.200 "get_zone_info": false, 00:17:00.200 "zone_management": false, 00:17:00.200 "zone_append": false, 00:17:00.200 "compare": false, 00:17:00.200 "compare_and_write": false, 00:17:00.200 "abort": false, 00:17:00.200 "seek_hole": false, 00:17:00.200 "seek_data": false, 00:17:00.200 "copy": false, 00:17:00.200 "nvme_iov_md": false 00:17:00.200 }, 00:17:00.200 "driver_specific": { 00:17:00.200 "raid": { 00:17:00.200 "uuid": "f0d4f361-e0b9-4ac5-b586-2dd30ba86f2b", 00:17:00.200 "strip_size_kb": 64, 00:17:00.200 "state": "online", 00:17:00.200 "raid_level": "raid5f", 00:17:00.200 "superblock": true, 00:17:00.200 "num_base_bdevs": 4, 00:17:00.200 "num_base_bdevs_discovered": 4, 00:17:00.200 "num_base_bdevs_operational": 4, 00:17:00.200 "base_bdevs_list": [ 00:17:00.200 { 00:17:00.200 "name": "NewBaseBdev", 00:17:00.200 "uuid": "f52276dc-b02c-4d7f-9054-7ab8f272df7a", 00:17:00.200 "is_configured": true, 00:17:00.200 "data_offset": 2048, 00:17:00.200 "data_size": 63488 00:17:00.200 }, 00:17:00.201 { 00:17:00.201 "name": "BaseBdev2", 00:17:00.201 "uuid": "88d3d76d-8004-40c7-9dac-ce374b3adaa1", 00:17:00.201 "is_configured": true, 00:17:00.201 "data_offset": 2048, 00:17:00.201 "data_size": 63488 00:17:00.201 }, 00:17:00.201 { 00:17:00.201 "name": "BaseBdev3", 00:17:00.201 "uuid": "4b937429-5efb-4ecd-a207-04392711d89a", 00:17:00.201 "is_configured": true, 00:17:00.201 "data_offset": 2048, 00:17:00.201 "data_size": 63488 00:17:00.201 }, 00:17:00.201 { 00:17:00.201 "name": "BaseBdev4", 00:17:00.201 "uuid": "68e2d63a-8f2d-4928-b3dc-4868602f5419", 00:17:00.201 "is_configured": true, 00:17:00.201 "data_offset": 2048, 00:17:00.201 "data_size": 63488 00:17:00.201 } 00:17:00.201 ] 00:17:00.201 } 00:17:00.201 } 00:17:00.201 }' 00:17:00.201 10:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:17:00.201 10:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:00.201 BaseBdev2 00:17:00.201 BaseBdev3 00:17:00.201 BaseBdev4' 00:17:00.201 10:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:00.201 10:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:00.201 10:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:00.201 10:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:00.201 10:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:00.201 10:40:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.201 10:40:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.201 10:40:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.201 10:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:00.201 10:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:00.201 10:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:00.201 10:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:00.201 10:40:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.201 10:40:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.201 10:40:03 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:00.201 10:40:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.461 10:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:00.461 10:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:00.461 10:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:00.461 10:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:00.461 10:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:00.461 10:40:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.461 10:40:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.461 10:40:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.461 10:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:00.461 10:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:00.461 10:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:00.461 10:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:00.461 10:40:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.461 10:40:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.461 10:40:03 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:00.461 10:40:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.461 10:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:00.461 10:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:00.461 10:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:00.462 10:40:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.462 10:40:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.462 [2024-11-20 10:40:03.808078] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:00.462 [2024-11-20 10:40:03.808153] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:00.462 [2024-11-20 10:40:03.808244] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:00.462 [2024-11-20 10:40:03.808563] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:00.462 [2024-11-20 10:40:03.808576] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:00.462 10:40:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.462 10:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83605 00:17:00.462 10:40:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83605 ']' 00:17:00.462 10:40:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 83605 00:17:00.462 10:40:03 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:00.462 10:40:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:00.462 10:40:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83605 00:17:00.462 10:40:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:00.462 10:40:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:00.462 10:40:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83605' 00:17:00.462 killing process with pid 83605 00:17:00.462 10:40:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83605 00:17:00.462 [2024-11-20 10:40:03.845574] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:00.462 10:40:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83605 00:17:01.032 [2024-11-20 10:40:04.233370] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:01.972 ************************************ 00:17:01.972 END TEST raid5f_state_function_test_sb 00:17:01.972 ************************************ 00:17:01.972 10:40:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:01.972 00:17:01.972 real 0m11.629s 00:17:01.972 user 0m18.533s 00:17:01.972 sys 0m2.069s 00:17:01.972 10:40:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:01.972 10:40:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.972 10:40:05 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:17:01.972 10:40:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:01.972 
10:40:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:01.972 10:40:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:01.972 ************************************ 00:17:01.972 START TEST raid5f_superblock_test 00:17:01.972 ************************************ 00:17:01.972 10:40:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:17:01.972 10:40:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:17:01.972 10:40:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:17:01.972 10:40:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:01.972 10:40:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:01.972 10:40:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:01.972 10:40:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:01.972 10:40:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:01.972 10:40:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:01.972 10:40:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:01.972 10:40:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:01.972 10:40:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:01.972 10:40:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:01.973 10:40:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:01.973 10:40:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:17:01.973 10:40:05 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:17:01.973 10:40:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:17:01.973 10:40:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84276 00:17:01.973 10:40:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:01.973 10:40:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84276 00:17:01.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.973 10:40:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84276 ']' 00:17:01.973 10:40:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.973 10:40:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:01.973 10:40:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.973 10:40:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:01.973 10:40:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.232 [2024-11-20 10:40:05.481105] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:17:02.232 [2024-11-20 10:40:05.481230] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84276 ] 00:17:02.233 [2024-11-20 10:40:05.655546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.492 [2024-11-20 10:40:05.767367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.492 [2024-11-20 10:40:05.959302] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:02.492 [2024-11-20 10:40:05.959386] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.063 malloc1 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.063 [2024-11-20 10:40:06.360812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:03.063 [2024-11-20 10:40:06.360941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.063 [2024-11-20 10:40:06.360989] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:03.063 [2024-11-20 10:40:06.361026] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.063 [2024-11-20 10:40:06.363232] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.063 [2024-11-20 10:40:06.363308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:03.063 pt1 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
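Further down, bdev_raid.sh@188 extracts the configured base bdev names from the `bdev_get_bdevs` output with a jq filter. A standalone replay of that exact filter against a trimmed copy of the raid_bdev_info JSON captured in this trace (assumes jq is installed; only the fields the filter touches are kept):

```shell
#!/bin/sh
# Trimmed raid_bdev_info from this trace: just the names and
# is_configured flags that the filter inspects.
info='{"driver_specific":{"raid":{"base_bdevs_list":[
  {"name":"pt1","is_configured":true},
  {"name":"pt2","is_configured":true},
  {"name":"pt3","is_configured":true},
  {"name":"pt4","is_configured":true}]}}}'
# Same filter bdev_raid.sh@188 uses to build base_bdev_names.
base_bdev_names=$(printf '%s' "$info" | jq -r \
  '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
echo "$base_bdev_names"
```

With all four slots configured, this prints pt1 through pt4, matching the `base_bdev_names='pt1 pt2 pt3 pt4'` assignment seen later in the trace.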
00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.063 malloc2 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.063 [2024-11-20 10:40:06.417684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:03.063 [2024-11-20 10:40:06.417802] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.063 [2024-11-20 10:40:06.417839] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:03.063 [2024-11-20 10:40:06.417868] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.063 [2024-11-20 10:40:06.419875] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.063 [2024-11-20 10:40:06.419946] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:03.063 pt2 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.063 malloc3 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.063 [2024-11-20 10:40:06.484550] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:03.063 [2024-11-20 10:40:06.484649] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.063 [2024-11-20 10:40:06.484685] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:03.063 [2024-11-20 10:40:06.484712] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.063 [2024-11-20 10:40:06.486700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.063 [2024-11-20 10:40:06.486771] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:03.063 pt3 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:03.063 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:03.064 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:17:03.064 10:40:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.064 10:40:06 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.064 malloc4 00:17:03.064 10:40:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.064 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:03.064 10:40:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.064 10:40:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.324 [2024-11-20 10:40:06.539967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:03.324 [2024-11-20 10:40:06.540068] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.324 [2024-11-20 10:40:06.540105] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:03.324 [2024-11-20 10:40:06.540132] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.324 [2024-11-20 10:40:06.542228] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.324 [2024-11-20 10:40:06.542296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:03.324 pt4 00:17:03.324 10:40:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.324 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:03.324 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:03.324 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:17:03.324 10:40:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.324 10:40:06 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:03.324 [2024-11-20 10:40:06.551980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:03.324 [2024-11-20 10:40:06.553734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:03.324 [2024-11-20 10:40:06.553854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:03.324 [2024-11-20 10:40:06.553935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:03.324 [2024-11-20 10:40:06.554123] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:03.324 [2024-11-20 10:40:06.554139] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:03.324 [2024-11-20 10:40:06.554411] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:03.324 [2024-11-20 10:40:06.561931] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:03.324 [2024-11-20 10:40:06.561953] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:03.324 [2024-11-20 10:40:06.562147] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.324 10:40:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.324 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:03.324 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.324 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.324 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:03.324 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:03.324 
10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:03.324 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.324 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.324 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.324 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.324 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.324 10:40:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.324 10:40:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.324 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.324 10:40:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.324 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.324 "name": "raid_bdev1", 00:17:03.324 "uuid": "9a61aae1-ab59-4bc9-821a-53b130964313", 00:17:03.324 "strip_size_kb": 64, 00:17:03.324 "state": "online", 00:17:03.324 "raid_level": "raid5f", 00:17:03.324 "superblock": true, 00:17:03.324 "num_base_bdevs": 4, 00:17:03.324 "num_base_bdevs_discovered": 4, 00:17:03.324 "num_base_bdevs_operational": 4, 00:17:03.324 "base_bdevs_list": [ 00:17:03.324 { 00:17:03.324 "name": "pt1", 00:17:03.324 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:03.324 "is_configured": true, 00:17:03.324 "data_offset": 2048, 00:17:03.324 "data_size": 63488 00:17:03.324 }, 00:17:03.324 { 00:17:03.324 "name": "pt2", 00:17:03.324 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:03.324 "is_configured": true, 00:17:03.324 "data_offset": 2048, 00:17:03.324 
"data_size": 63488 00:17:03.324 }, 00:17:03.325 { 00:17:03.325 "name": "pt3", 00:17:03.325 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:03.325 "is_configured": true, 00:17:03.325 "data_offset": 2048, 00:17:03.325 "data_size": 63488 00:17:03.325 }, 00:17:03.325 { 00:17:03.325 "name": "pt4", 00:17:03.325 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:03.325 "is_configured": true, 00:17:03.325 "data_offset": 2048, 00:17:03.325 "data_size": 63488 00:17:03.325 } 00:17:03.325 ] 00:17:03.325 }' 00:17:03.325 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.325 10:40:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.585 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:03.585 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:03.585 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:03.585 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:03.585 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:03.585 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:03.585 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:03.585 10:40:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:03.585 10:40:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.585 10:40:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.585 [2024-11-20 10:40:06.966199] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:03.585 10:40:06 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.585 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:03.585 "name": "raid_bdev1", 00:17:03.585 "aliases": [ 00:17:03.585 "9a61aae1-ab59-4bc9-821a-53b130964313" 00:17:03.585 ], 00:17:03.585 "product_name": "Raid Volume", 00:17:03.585 "block_size": 512, 00:17:03.585 "num_blocks": 190464, 00:17:03.585 "uuid": "9a61aae1-ab59-4bc9-821a-53b130964313", 00:17:03.585 "assigned_rate_limits": { 00:17:03.585 "rw_ios_per_sec": 0, 00:17:03.585 "rw_mbytes_per_sec": 0, 00:17:03.585 "r_mbytes_per_sec": 0, 00:17:03.585 "w_mbytes_per_sec": 0 00:17:03.585 }, 00:17:03.585 "claimed": false, 00:17:03.585 "zoned": false, 00:17:03.585 "supported_io_types": { 00:17:03.585 "read": true, 00:17:03.585 "write": true, 00:17:03.585 "unmap": false, 00:17:03.585 "flush": false, 00:17:03.585 "reset": true, 00:17:03.585 "nvme_admin": false, 00:17:03.585 "nvme_io": false, 00:17:03.585 "nvme_io_md": false, 00:17:03.585 "write_zeroes": true, 00:17:03.585 "zcopy": false, 00:17:03.585 "get_zone_info": false, 00:17:03.585 "zone_management": false, 00:17:03.585 "zone_append": false, 00:17:03.585 "compare": false, 00:17:03.585 "compare_and_write": false, 00:17:03.585 "abort": false, 00:17:03.585 "seek_hole": false, 00:17:03.585 "seek_data": false, 00:17:03.585 "copy": false, 00:17:03.585 "nvme_iov_md": false 00:17:03.585 }, 00:17:03.585 "driver_specific": { 00:17:03.585 "raid": { 00:17:03.585 "uuid": "9a61aae1-ab59-4bc9-821a-53b130964313", 00:17:03.585 "strip_size_kb": 64, 00:17:03.585 "state": "online", 00:17:03.585 "raid_level": "raid5f", 00:17:03.585 "superblock": true, 00:17:03.585 "num_base_bdevs": 4, 00:17:03.585 "num_base_bdevs_discovered": 4, 00:17:03.585 "num_base_bdevs_operational": 4, 00:17:03.585 "base_bdevs_list": [ 00:17:03.585 { 00:17:03.585 "name": "pt1", 00:17:03.585 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:03.585 "is_configured": true, 00:17:03.585 "data_offset": 2048, 
00:17:03.585 "data_size": 63488 00:17:03.585 }, 00:17:03.585 { 00:17:03.585 "name": "pt2", 00:17:03.585 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:03.585 "is_configured": true, 00:17:03.585 "data_offset": 2048, 00:17:03.585 "data_size": 63488 00:17:03.585 }, 00:17:03.585 { 00:17:03.585 "name": "pt3", 00:17:03.585 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:03.585 "is_configured": true, 00:17:03.585 "data_offset": 2048, 00:17:03.585 "data_size": 63488 00:17:03.585 }, 00:17:03.585 { 00:17:03.585 "name": "pt4", 00:17:03.585 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:03.585 "is_configured": true, 00:17:03.585 "data_offset": 2048, 00:17:03.585 "data_size": 63488 00:17:03.585 } 00:17:03.585 ] 00:17:03.585 } 00:17:03.585 } 00:17:03.585 }' 00:17:03.585 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:03.585 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:03.585 pt2 00:17:03.585 pt3 00:17:03.585 pt4' 00:17:03.585 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.846 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:03.846 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:03.846 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:03.846 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.846 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.846 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.846 10:40:07 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.846 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:03.846 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:03.846 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:03.846 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:03.846 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.846 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.846 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.846 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.846 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:03.846 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:03.846 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:03.846 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:03.846 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.846 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.846 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.846 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.846 10:40:07 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:03.846 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:03.846 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:03.846 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:03.846 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.846 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.846 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.846 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.846 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:03.846 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:03.846 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:03.846 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.846 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.846 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:03.846 [2024-11-20 10:40:07.309601] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:03.846 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.106 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9a61aae1-ab59-4bc9-821a-53b130964313 00:17:04.106 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
9a61aae1-ab59-4bc9-821a-53b130964313 ']' 00:17:04.106 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.107 [2024-11-20 10:40:07.357320] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:04.107 [2024-11-20 10:40:07.357426] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:04.107 [2024-11-20 10:40:07.357545] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:04.107 [2024-11-20 10:40:07.357697] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:04.107 [2024-11-20 10:40:07.357753] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:04.107 
10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.107 10:40:07 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.107 [2024-11-20 10:40:07.521052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:04.107 [2024-11-20 10:40:07.523235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:04.107 [2024-11-20 10:40:07.523351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:04.107 [2024-11-20 10:40:07.523488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:04.107 [2024-11-20 10:40:07.523598] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:04.107 [2024-11-20 10:40:07.523751] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:04.107 [2024-11-20 10:40:07.523832] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:04.107 [2024-11-20 10:40:07.523920] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:17:04.107 [2024-11-20 10:40:07.523997] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:04.107 [2024-11-20 10:40:07.524040] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:04.107 request: 00:17:04.107 { 00:17:04.107 "name": "raid_bdev1", 00:17:04.107 "raid_level": "raid5f", 00:17:04.107 "base_bdevs": [ 00:17:04.107 "malloc1", 00:17:04.107 "malloc2", 00:17:04.107 "malloc3", 00:17:04.107 "malloc4" 00:17:04.107 ], 00:17:04.107 "strip_size_kb": 64, 00:17:04.107 "superblock": false, 00:17:04.107 "method": "bdev_raid_create", 00:17:04.107 "req_id": 1 00:17:04.107 } 00:17:04.107 Got JSON-RPC error response 
00:17:04.107 response: 00:17:04.107 { 00:17:04.107 "code": -17, 00:17:04.107 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:04.107 } 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.107 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.368 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:04.368 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:04.368 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:04.368 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.368 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.368 [2024-11-20 10:40:07.588907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:04.368 [2024-11-20 10:40:07.589057] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:17:04.368 [2024-11-20 10:40:07.589080] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:04.368 [2024-11-20 10:40:07.589090] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.368 [2024-11-20 10:40:07.591247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.368 [2024-11-20 10:40:07.591290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:04.368 [2024-11-20 10:40:07.591406] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:04.368 [2024-11-20 10:40:07.591483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:04.368 pt1 00:17:04.368 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.368 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:17:04.368 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:04.368 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:04.368 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:04.368 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:04.368 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:04.368 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.368 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.368 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.368 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:17:04.368 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.368 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.368 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.368 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.368 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.368 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.368 "name": "raid_bdev1", 00:17:04.368 "uuid": "9a61aae1-ab59-4bc9-821a-53b130964313", 00:17:04.368 "strip_size_kb": 64, 00:17:04.368 "state": "configuring", 00:17:04.368 "raid_level": "raid5f", 00:17:04.368 "superblock": true, 00:17:04.368 "num_base_bdevs": 4, 00:17:04.368 "num_base_bdevs_discovered": 1, 00:17:04.368 "num_base_bdevs_operational": 4, 00:17:04.368 "base_bdevs_list": [ 00:17:04.368 { 00:17:04.368 "name": "pt1", 00:17:04.368 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:04.368 "is_configured": true, 00:17:04.368 "data_offset": 2048, 00:17:04.368 "data_size": 63488 00:17:04.368 }, 00:17:04.368 { 00:17:04.368 "name": null, 00:17:04.368 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:04.368 "is_configured": false, 00:17:04.368 "data_offset": 2048, 00:17:04.368 "data_size": 63488 00:17:04.368 }, 00:17:04.368 { 00:17:04.368 "name": null, 00:17:04.368 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:04.368 "is_configured": false, 00:17:04.368 "data_offset": 2048, 00:17:04.368 "data_size": 63488 00:17:04.368 }, 00:17:04.368 { 00:17:04.368 "name": null, 00:17:04.368 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:04.368 "is_configured": false, 00:17:04.368 "data_offset": 2048, 00:17:04.368 "data_size": 63488 00:17:04.368 } 00:17:04.368 ] 00:17:04.368 }' 
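For readers following the helper logic: `verify_raid_bdev_state` pipes the `bdev_raid_get_bdevs` JSON above through `jq -r '.[] | select(.name == "raid_bdev1")'` and compares individual fields against the expected values. A rough Python sketch of the same comparison, using only field names that actually appear in the captured JSON (the real helper is shell + jq, this is purely illustrative):

```python
import json

# The raid_bdev1 info as captured in the log while only pt1 is configured
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid5f",
  "superblock": true,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 4
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, operational):
    # Mirrors the field-by-field checks the shell helper performs via jq
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational

# Corresponds to: verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
verify_raid_bdev_state(raid_bdev_info, "configuring", "raid5f", 64, 4)
```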
00:17:04.368 10:40:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.368 10:40:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.629 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:17:04.629 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:04.629 10:40:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.629 10:40:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.629 [2024-11-20 10:40:08.052128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:04.629 [2024-11-20 10:40:08.052286] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:04.629 [2024-11-20 10:40:08.052329] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:04.629 [2024-11-20 10:40:08.052383] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.629 [2024-11-20 10:40:08.052954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.629 [2024-11-20 10:40:08.053018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:04.629 [2024-11-20 10:40:08.053128] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:04.629 [2024-11-20 10:40:08.053182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:04.629 pt2 00:17:04.629 10:40:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.629 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:17:04.629 10:40:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:04.629 10:40:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.629 [2024-11-20 10:40:08.064125] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:04.629 10:40:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.629 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:17:04.629 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:04.629 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:04.629 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:04.629 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:04.629 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:04.629 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.629 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.629 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.629 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.629 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.629 10:40:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.629 10:40:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.629 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.629 10:40:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:17:04.890 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.890 "name": "raid_bdev1", 00:17:04.890 "uuid": "9a61aae1-ab59-4bc9-821a-53b130964313", 00:17:04.890 "strip_size_kb": 64, 00:17:04.890 "state": "configuring", 00:17:04.890 "raid_level": "raid5f", 00:17:04.890 "superblock": true, 00:17:04.890 "num_base_bdevs": 4, 00:17:04.890 "num_base_bdevs_discovered": 1, 00:17:04.890 "num_base_bdevs_operational": 4, 00:17:04.890 "base_bdevs_list": [ 00:17:04.890 { 00:17:04.890 "name": "pt1", 00:17:04.890 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:04.890 "is_configured": true, 00:17:04.890 "data_offset": 2048, 00:17:04.890 "data_size": 63488 00:17:04.890 }, 00:17:04.890 { 00:17:04.890 "name": null, 00:17:04.890 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:04.890 "is_configured": false, 00:17:04.890 "data_offset": 0, 00:17:04.890 "data_size": 63488 00:17:04.890 }, 00:17:04.890 { 00:17:04.890 "name": null, 00:17:04.890 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:04.890 "is_configured": false, 00:17:04.890 "data_offset": 2048, 00:17:04.890 "data_size": 63488 00:17:04.890 }, 00:17:04.890 { 00:17:04.890 "name": null, 00:17:04.890 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:04.890 "is_configured": false, 00:17:04.890 "data_offset": 2048, 00:17:04.890 "data_size": 63488 00:17:04.890 } 00:17:04.890 ] 00:17:04.890 }' 00:17:04.890 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.890 10:40:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.152 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:05.152 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:05.152 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
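The `(( i = 1 ))` / `(( i < num_base_bdevs ))` entries above are xtrace output of a shell counting loop that recreates the passthru bdevs pt2 through pt4 on top of malloc2 through malloc4. A small Python sketch of the naming pattern visible in the log (illustrative only, not SPDK code; the argument layout is copied from the `bdev_passthru_create` invocations recorded above):

```python
num_base_bdevs = 4

# Reproduce the pt2..pt4 creation arguments seen in the xtrace output:
# i runs 1..3, and each iteration targets base bdev malloc(i+1)
commands = []
for i in range(1, num_base_bdevs):
    n = i + 1
    commands.append(
        f"bdev_passthru_create -b malloc{n} -p pt{n} "
        f"-u 00000000-0000-0000-0000-00000000000{n}"
    )

for cmd in commands:
    print(cmd)
```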
00:17:05.152 10:40:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.152 10:40:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.152 [2024-11-20 10:40:08.527333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:05.152 [2024-11-20 10:40:08.527433] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.152 [2024-11-20 10:40:08.527455] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:05.152 [2024-11-20 10:40:08.527464] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.152 [2024-11-20 10:40:08.527904] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.152 [2024-11-20 10:40:08.527922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:05.152 [2024-11-20 10:40:08.528003] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:05.152 [2024-11-20 10:40:08.528023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:05.152 pt2 00:17:05.152 10:40:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.152 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:05.152 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:05.152 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:05.152 10:40:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.152 10:40:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.152 [2024-11-20 10:40:08.535289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:17:05.152 [2024-11-20 10:40:08.535342] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.152 [2024-11-20 10:40:08.535379] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:05.152 [2024-11-20 10:40:08.535388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.152 [2024-11-20 10:40:08.535737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.152 [2024-11-20 10:40:08.535753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:05.152 [2024-11-20 10:40:08.535815] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:05.152 [2024-11-20 10:40:08.535832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:05.152 pt3 00:17:05.152 10:40:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.152 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:05.152 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:05.152 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:05.152 10:40:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.152 10:40:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.152 [2024-11-20 10:40:08.543249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:05.152 [2024-11-20 10:40:08.543342] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.152 [2024-11-20 10:40:08.543389] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:05.152 [2024-11-20 10:40:08.543398] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.152 [2024-11-20 10:40:08.543766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.152 [2024-11-20 10:40:08.543783] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:05.152 [2024-11-20 10:40:08.543847] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:05.152 [2024-11-20 10:40:08.543864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:05.152 [2024-11-20 10:40:08.543987] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:05.152 [2024-11-20 10:40:08.543996] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:05.152 [2024-11-20 10:40:08.544218] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:05.152 [2024-11-20 10:40:08.550852] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:05.152 [2024-11-20 10:40:08.550875] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:05.152 [2024-11-20 10:40:08.551045] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.152 pt4 00:17:05.152 10:40:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.152 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:05.152 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:05.152 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:05.152 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:05.152 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:05.152 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:05.152 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:05.152 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:05.152 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.152 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.152 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.152 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.152 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.152 10:40:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.152 10:40:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.152 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.152 10:40:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.152 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.152 "name": "raid_bdev1", 00:17:05.152 "uuid": "9a61aae1-ab59-4bc9-821a-53b130964313", 00:17:05.152 "strip_size_kb": 64, 00:17:05.152 "state": "online", 00:17:05.152 "raid_level": "raid5f", 00:17:05.152 "superblock": true, 00:17:05.152 "num_base_bdevs": 4, 00:17:05.152 "num_base_bdevs_discovered": 4, 00:17:05.152 "num_base_bdevs_operational": 4, 00:17:05.152 "base_bdevs_list": [ 00:17:05.152 { 00:17:05.152 "name": "pt1", 00:17:05.152 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:05.152 "is_configured": true, 00:17:05.152 
"data_offset": 2048, 00:17:05.152 "data_size": 63488 00:17:05.152 }, 00:17:05.152 { 00:17:05.152 "name": "pt2", 00:17:05.152 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:05.152 "is_configured": true, 00:17:05.152 "data_offset": 2048, 00:17:05.152 "data_size": 63488 00:17:05.152 }, 00:17:05.152 { 00:17:05.152 "name": "pt3", 00:17:05.152 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:05.152 "is_configured": true, 00:17:05.153 "data_offset": 2048, 00:17:05.153 "data_size": 63488 00:17:05.153 }, 00:17:05.153 { 00:17:05.153 "name": "pt4", 00:17:05.153 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:05.153 "is_configured": true, 00:17:05.153 "data_offset": 2048, 00:17:05.153 "data_size": 63488 00:17:05.153 } 00:17:05.153 ] 00:17:05.153 }' 00:17:05.153 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.153 10:40:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.723 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:05.723 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:05.723 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:05.723 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:05.723 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:05.723 10:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:05.723 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:05.723 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.723 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.723 10:40:09 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:05.723 [2024-11-20 10:40:09.006771] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:05.723 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.723 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:05.723 "name": "raid_bdev1", 00:17:05.723 "aliases": [ 00:17:05.723 "9a61aae1-ab59-4bc9-821a-53b130964313" 00:17:05.723 ], 00:17:05.723 "product_name": "Raid Volume", 00:17:05.723 "block_size": 512, 00:17:05.723 "num_blocks": 190464, 00:17:05.723 "uuid": "9a61aae1-ab59-4bc9-821a-53b130964313", 00:17:05.723 "assigned_rate_limits": { 00:17:05.723 "rw_ios_per_sec": 0, 00:17:05.723 "rw_mbytes_per_sec": 0, 00:17:05.723 "r_mbytes_per_sec": 0, 00:17:05.723 "w_mbytes_per_sec": 0 00:17:05.723 }, 00:17:05.723 "claimed": false, 00:17:05.723 "zoned": false, 00:17:05.723 "supported_io_types": { 00:17:05.723 "read": true, 00:17:05.723 "write": true, 00:17:05.723 "unmap": false, 00:17:05.723 "flush": false, 00:17:05.723 "reset": true, 00:17:05.723 "nvme_admin": false, 00:17:05.723 "nvme_io": false, 00:17:05.723 "nvme_io_md": false, 00:17:05.723 "write_zeroes": true, 00:17:05.723 "zcopy": false, 00:17:05.723 "get_zone_info": false, 00:17:05.724 "zone_management": false, 00:17:05.724 "zone_append": false, 00:17:05.724 "compare": false, 00:17:05.724 "compare_and_write": false, 00:17:05.724 "abort": false, 00:17:05.724 "seek_hole": false, 00:17:05.724 "seek_data": false, 00:17:05.724 "copy": false, 00:17:05.724 "nvme_iov_md": false 00:17:05.724 }, 00:17:05.724 "driver_specific": { 00:17:05.724 "raid": { 00:17:05.724 "uuid": "9a61aae1-ab59-4bc9-821a-53b130964313", 00:17:05.724 "strip_size_kb": 64, 00:17:05.724 "state": "online", 00:17:05.724 "raid_level": "raid5f", 00:17:05.724 "superblock": true, 00:17:05.724 "num_base_bdevs": 4, 00:17:05.724 "num_base_bdevs_discovered": 4, 
00:17:05.724 "num_base_bdevs_operational": 4, 00:17:05.724 "base_bdevs_list": [ 00:17:05.724 { 00:17:05.724 "name": "pt1", 00:17:05.724 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:05.724 "is_configured": true, 00:17:05.724 "data_offset": 2048, 00:17:05.724 "data_size": 63488 00:17:05.724 }, 00:17:05.724 { 00:17:05.724 "name": "pt2", 00:17:05.724 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:05.724 "is_configured": true, 00:17:05.724 "data_offset": 2048, 00:17:05.724 "data_size": 63488 00:17:05.724 }, 00:17:05.724 { 00:17:05.724 "name": "pt3", 00:17:05.724 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:05.724 "is_configured": true, 00:17:05.724 "data_offset": 2048, 00:17:05.724 "data_size": 63488 00:17:05.724 }, 00:17:05.724 { 00:17:05.724 "name": "pt4", 00:17:05.724 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:05.724 "is_configured": true, 00:17:05.724 "data_offset": 2048, 00:17:05.724 "data_size": 63488 00:17:05.724 } 00:17:05.724 ] 00:17:05.724 } 00:17:05.724 } 00:17:05.724 }' 00:17:05.724 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:05.724 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:05.724 pt2 00:17:05.724 pt3 00:17:05.724 pt4' 00:17:05.724 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:05.724 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:05.724 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:05.724 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:05.724 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:17:05.724 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.724 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.724 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.724 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:05.724 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:05.724 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:05.724 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:05.724 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:05.724 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.724 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.983 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.983 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:05.983 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.984 [2024-11-20 10:40:09.318297] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.984 10:40:09 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9a61aae1-ab59-4bc9-821a-53b130964313 '!=' 9a61aae1-ab59-4bc9-821a-53b130964313 ']' 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.984 [2024-11-20 10:40:09.362055] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.984 "name": "raid_bdev1", 00:17:05.984 "uuid": "9a61aae1-ab59-4bc9-821a-53b130964313", 00:17:05.984 "strip_size_kb": 64, 00:17:05.984 "state": "online", 00:17:05.984 "raid_level": "raid5f", 00:17:05.984 "superblock": true, 00:17:05.984 "num_base_bdevs": 4, 00:17:05.984 "num_base_bdevs_discovered": 3, 00:17:05.984 "num_base_bdevs_operational": 3, 00:17:05.984 "base_bdevs_list": [ 00:17:05.984 { 00:17:05.984 "name": null, 00:17:05.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.984 "is_configured": false, 00:17:05.984 "data_offset": 0, 00:17:05.984 "data_size": 63488 00:17:05.984 }, 00:17:05.984 { 00:17:05.984 "name": "pt2", 00:17:05.984 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:05.984 "is_configured": true, 00:17:05.984 "data_offset": 2048, 00:17:05.984 "data_size": 63488 00:17:05.984 }, 00:17:05.984 { 00:17:05.984 "name": "pt3", 00:17:05.984 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:05.984 "is_configured": true, 00:17:05.984 "data_offset": 2048, 00:17:05.984 "data_size": 63488 00:17:05.984 }, 00:17:05.984 { 00:17:05.984 "name": "pt4", 00:17:05.984 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:05.984 "is_configured": true, 00:17:05.984 
"data_offset": 2048, 00:17:05.984 "data_size": 63488 00:17:05.984 } 00:17:05.984 ] 00:17:05.984 }' 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.984 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.556 [2024-11-20 10:40:09.789336] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:06.556 [2024-11-20 10:40:09.789437] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:06.556 [2024-11-20 10:40:09.789537] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:06.556 [2024-11-20 10:40:09.789648] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:06.556 [2024-11-20 10:40:09.789709] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.556 [2024-11-20 10:40:09.885146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:06.556 [2024-11-20 10:40:09.885207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.556 [2024-11-20 10:40:09.885226] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:06.556 [2024-11-20 10:40:09.885234] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.556 [2024-11-20 10:40:09.887464] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.556 [2024-11-20 10:40:09.887552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:06.556 [2024-11-20 10:40:09.887648] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:06.556 [2024-11-20 10:40:09.887714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:06.556 pt2 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.556 "name": "raid_bdev1", 00:17:06.556 "uuid": "9a61aae1-ab59-4bc9-821a-53b130964313", 00:17:06.556 "strip_size_kb": 64, 00:17:06.556 "state": "configuring", 00:17:06.556 "raid_level": "raid5f", 00:17:06.556 "superblock": true, 00:17:06.556 
"num_base_bdevs": 4, 00:17:06.556 "num_base_bdevs_discovered": 1, 00:17:06.556 "num_base_bdevs_operational": 3, 00:17:06.556 "base_bdevs_list": [ 00:17:06.556 { 00:17:06.556 "name": null, 00:17:06.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.556 "is_configured": false, 00:17:06.556 "data_offset": 2048, 00:17:06.556 "data_size": 63488 00:17:06.556 }, 00:17:06.556 { 00:17:06.556 "name": "pt2", 00:17:06.556 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:06.556 "is_configured": true, 00:17:06.556 "data_offset": 2048, 00:17:06.556 "data_size": 63488 00:17:06.556 }, 00:17:06.556 { 00:17:06.556 "name": null, 00:17:06.556 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:06.556 "is_configured": false, 00:17:06.556 "data_offset": 2048, 00:17:06.556 "data_size": 63488 00:17:06.556 }, 00:17:06.556 { 00:17:06.556 "name": null, 00:17:06.556 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:06.556 "is_configured": false, 00:17:06.556 "data_offset": 2048, 00:17:06.556 "data_size": 63488 00:17:06.556 } 00:17:06.556 ] 00:17:06.556 }' 00:17:06.556 10:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.557 10:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.127 10:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:07.127 10:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:07.127 10:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:07.127 10:40:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.127 10:40:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.127 [2024-11-20 10:40:10.328413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:07.127 [2024-11-20 
10:40:10.328524] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.127 [2024-11-20 10:40:10.328562] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:07.127 [2024-11-20 10:40:10.328588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.127 [2024-11-20 10:40:10.329036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.127 [2024-11-20 10:40:10.329094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:07.127 [2024-11-20 10:40:10.329207] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:07.127 [2024-11-20 10:40:10.329263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:07.127 pt3 00:17:07.127 10:40:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.127 10:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:07.127 10:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.127 10:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:07.127 10:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:07.127 10:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:07.127 10:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:07.127 10:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.127 10:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.127 10:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:07.127 10:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.127 10:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.127 10:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.127 10:40:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.127 10:40:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.127 10:40:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.127 10:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.127 "name": "raid_bdev1", 00:17:07.127 "uuid": "9a61aae1-ab59-4bc9-821a-53b130964313", 00:17:07.127 "strip_size_kb": 64, 00:17:07.127 "state": "configuring", 00:17:07.127 "raid_level": "raid5f", 00:17:07.127 "superblock": true, 00:17:07.127 "num_base_bdevs": 4, 00:17:07.127 "num_base_bdevs_discovered": 2, 00:17:07.127 "num_base_bdevs_operational": 3, 00:17:07.127 "base_bdevs_list": [ 00:17:07.127 { 00:17:07.127 "name": null, 00:17:07.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.127 "is_configured": false, 00:17:07.127 "data_offset": 2048, 00:17:07.127 "data_size": 63488 00:17:07.127 }, 00:17:07.127 { 00:17:07.127 "name": "pt2", 00:17:07.127 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:07.127 "is_configured": true, 00:17:07.127 "data_offset": 2048, 00:17:07.127 "data_size": 63488 00:17:07.127 }, 00:17:07.127 { 00:17:07.127 "name": "pt3", 00:17:07.127 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:07.127 "is_configured": true, 00:17:07.127 "data_offset": 2048, 00:17:07.127 "data_size": 63488 00:17:07.127 }, 00:17:07.127 { 00:17:07.127 "name": null, 00:17:07.127 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:07.127 "is_configured": false, 00:17:07.127 "data_offset": 2048, 
00:17:07.127 "data_size": 63488 00:17:07.127 } 00:17:07.127 ] 00:17:07.127 }' 00:17:07.127 10:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.127 10:40:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.388 10:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:07.388 10:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:07.388 10:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:17:07.388 10:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:07.388 10:40:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.388 10:40:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.388 [2024-11-20 10:40:10.759665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:07.388 [2024-11-20 10:40:10.759771] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.388 [2024-11-20 10:40:10.759797] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:07.388 [2024-11-20 10:40:10.759806] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.388 [2024-11-20 10:40:10.760236] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.388 [2024-11-20 10:40:10.760253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:07.388 [2024-11-20 10:40:10.760336] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:07.388 [2024-11-20 10:40:10.760369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:07.388 [2024-11-20 10:40:10.760505] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:07.388 [2024-11-20 10:40:10.760514] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:07.388 [2024-11-20 10:40:10.760740] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:07.388 [2024-11-20 10:40:10.767582] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:07.388 [2024-11-20 10:40:10.767608] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:07.388 [2024-11-20 10:40:10.767883] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:07.388 pt4 00:17:07.388 10:40:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.388 10:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:07.388 10:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.388 10:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:07.388 10:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:07.388 10:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:07.388 10:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:07.388 10:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.388 10:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.388 10:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.388 10:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.388 
10:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.388 10:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.388 10:40:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.388 10:40:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.388 10:40:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.388 10:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.388 "name": "raid_bdev1", 00:17:07.388 "uuid": "9a61aae1-ab59-4bc9-821a-53b130964313", 00:17:07.388 "strip_size_kb": 64, 00:17:07.388 "state": "online", 00:17:07.388 "raid_level": "raid5f", 00:17:07.388 "superblock": true, 00:17:07.388 "num_base_bdevs": 4, 00:17:07.388 "num_base_bdevs_discovered": 3, 00:17:07.388 "num_base_bdevs_operational": 3, 00:17:07.388 "base_bdevs_list": [ 00:17:07.388 { 00:17:07.388 "name": null, 00:17:07.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.388 "is_configured": false, 00:17:07.388 "data_offset": 2048, 00:17:07.388 "data_size": 63488 00:17:07.388 }, 00:17:07.388 { 00:17:07.388 "name": "pt2", 00:17:07.388 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:07.388 "is_configured": true, 00:17:07.388 "data_offset": 2048, 00:17:07.388 "data_size": 63488 00:17:07.388 }, 00:17:07.388 { 00:17:07.388 "name": "pt3", 00:17:07.388 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:07.388 "is_configured": true, 00:17:07.388 "data_offset": 2048, 00:17:07.388 "data_size": 63488 00:17:07.388 }, 00:17:07.388 { 00:17:07.388 "name": "pt4", 00:17:07.388 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:07.388 "is_configured": true, 00:17:07.388 "data_offset": 2048, 00:17:07.388 "data_size": 63488 00:17:07.388 } 00:17:07.388 ] 00:17:07.388 }' 00:17:07.388 10:40:10 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.388 10:40:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.958 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:07.958 10:40:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.958 10:40:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.958 [2024-11-20 10:40:11.200118] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:07.958 [2024-11-20 10:40:11.200190] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:07.958 [2024-11-20 10:40:11.200275] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:07.958 [2024-11-20 10:40:11.200389] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:07.958 [2024-11-20 10:40:11.200481] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:07.958 10:40:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.958 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.958 10:40:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.958 10:40:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.958 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:07.958 10:40:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.958 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:07.958 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:17:07.958 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:17:07.958 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:17:07.958 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:17:07.958 10:40:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.958 10:40:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.958 10:40:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.958 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:07.958 10:40:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.958 10:40:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.958 [2024-11-20 10:40:11.275986] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:07.958 [2024-11-20 10:40:11.276043] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.958 [2024-11-20 10:40:11.276067] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:17:07.958 [2024-11-20 10:40:11.276078] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.958 [2024-11-20 10:40:11.278244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.958 [2024-11-20 10:40:11.278284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:07.958 [2024-11-20 10:40:11.278350] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:07.958 [2024-11-20 10:40:11.278414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:07.958 
[2024-11-20 10:40:11.278574] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:07.958 [2024-11-20 10:40:11.278588] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:07.958 [2024-11-20 10:40:11.278602] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:07.958 [2024-11-20 10:40:11.278661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:07.958 [2024-11-20 10:40:11.278758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:07.958 pt1 00:17:07.958 10:40:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.958 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:17:07.958 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:07.958 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.958 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:07.958 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:07.958 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:07.958 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:07.958 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.958 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.958 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.958 10:40:11 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.958 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.958 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.958 10:40:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.958 10:40:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.958 10:40:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.958 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.958 "name": "raid_bdev1", 00:17:07.958 "uuid": "9a61aae1-ab59-4bc9-821a-53b130964313", 00:17:07.958 "strip_size_kb": 64, 00:17:07.958 "state": "configuring", 00:17:07.958 "raid_level": "raid5f", 00:17:07.958 "superblock": true, 00:17:07.958 "num_base_bdevs": 4, 00:17:07.958 "num_base_bdevs_discovered": 2, 00:17:07.958 "num_base_bdevs_operational": 3, 00:17:07.958 "base_bdevs_list": [ 00:17:07.958 { 00:17:07.958 "name": null, 00:17:07.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.958 "is_configured": false, 00:17:07.958 "data_offset": 2048, 00:17:07.958 "data_size": 63488 00:17:07.958 }, 00:17:07.958 { 00:17:07.958 "name": "pt2", 00:17:07.958 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:07.958 "is_configured": true, 00:17:07.958 "data_offset": 2048, 00:17:07.958 "data_size": 63488 00:17:07.958 }, 00:17:07.958 { 00:17:07.958 "name": "pt3", 00:17:07.958 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:07.958 "is_configured": true, 00:17:07.958 "data_offset": 2048, 00:17:07.958 "data_size": 63488 00:17:07.958 }, 00:17:07.958 { 00:17:07.958 "name": null, 00:17:07.959 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:07.959 "is_configured": false, 00:17:07.959 "data_offset": 2048, 00:17:07.959 "data_size": 63488 00:17:07.959 } 00:17:07.959 ] 
00:17:07.959 }' 00:17:07.959 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.959 10:40:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.527 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:17:08.527 10:40:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.527 10:40:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.527 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:08.527 10:40:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.527 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:17:08.527 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:08.527 10:40:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.527 10:40:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.527 [2024-11-20 10:40:11.783231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:08.527 [2024-11-20 10:40:11.783348] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.527 [2024-11-20 10:40:11.783409] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:17:08.527 [2024-11-20 10:40:11.783439] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.527 [2024-11-20 10:40:11.783961] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.527 [2024-11-20 10:40:11.784026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:17:08.527 [2024-11-20 10:40:11.784154] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:08.527 [2024-11-20 10:40:11.784220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:08.527 [2024-11-20 10:40:11.784436] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:08.527 [2024-11-20 10:40:11.784480] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:08.527 [2024-11-20 10:40:11.784779] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:08.527 [2024-11-20 10:40:11.792137] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:08.527 [2024-11-20 10:40:11.792203] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:08.527 [2024-11-20 10:40:11.792537] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.527 pt4 00:17:08.527 10:40:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.527 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:08.527 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.527 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.527 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:08.527 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.527 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:08.527 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.527 10:40:11 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.527 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.527 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.527 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.527 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.527 10:40:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.527 10:40:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.527 10:40:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.527 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.527 "name": "raid_bdev1", 00:17:08.527 "uuid": "9a61aae1-ab59-4bc9-821a-53b130964313", 00:17:08.527 "strip_size_kb": 64, 00:17:08.527 "state": "online", 00:17:08.527 "raid_level": "raid5f", 00:17:08.527 "superblock": true, 00:17:08.527 "num_base_bdevs": 4, 00:17:08.527 "num_base_bdevs_discovered": 3, 00:17:08.527 "num_base_bdevs_operational": 3, 00:17:08.527 "base_bdevs_list": [ 00:17:08.527 { 00:17:08.527 "name": null, 00:17:08.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.527 "is_configured": false, 00:17:08.527 "data_offset": 2048, 00:17:08.527 "data_size": 63488 00:17:08.527 }, 00:17:08.527 { 00:17:08.527 "name": "pt2", 00:17:08.527 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:08.527 "is_configured": true, 00:17:08.527 "data_offset": 2048, 00:17:08.527 "data_size": 63488 00:17:08.527 }, 00:17:08.527 { 00:17:08.527 "name": "pt3", 00:17:08.527 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:08.527 "is_configured": true, 00:17:08.527 "data_offset": 2048, 00:17:08.527 "data_size": 63488 
00:17:08.527 }, 00:17:08.527 { 00:17:08.527 "name": "pt4", 00:17:08.527 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:08.527 "is_configured": true, 00:17:08.527 "data_offset": 2048, 00:17:08.527 "data_size": 63488 00:17:08.527 } 00:17:08.527 ] 00:17:08.527 }' 00:17:08.527 10:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.527 10:40:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.786 10:40:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:08.786 10:40:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:08.786 10:40:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.786 10:40:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.786 10:40:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.786 10:40:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:08.786 10:40:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:08.786 10:40:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:08.786 10:40:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.786 10:40:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.786 [2024-11-20 10:40:12.260677] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:09.046 10:40:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.046 10:40:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 9a61aae1-ab59-4bc9-821a-53b130964313 '!=' 9a61aae1-ab59-4bc9-821a-53b130964313 ']' 00:17:09.046 10:40:12 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84276 00:17:09.046 10:40:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84276 ']' 00:17:09.046 10:40:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84276 00:17:09.046 10:40:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:17:09.046 10:40:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:09.046 10:40:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84276 00:17:09.046 10:40:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:09.046 10:40:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:09.046 10:40:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84276' 00:17:09.046 killing process with pid 84276 00:17:09.046 10:40:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84276 00:17:09.046 [2024-11-20 10:40:12.315603] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:09.046 [2024-11-20 10:40:12.315706] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:09.046 [2024-11-20 10:40:12.315788] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:09.046 10:40:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84276 00:17:09.046 [2024-11-20 10:40:12.315802] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:09.305 [2024-11-20 10:40:12.686499] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:10.687 10:40:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:10.687 
00:17:10.687 real 0m8.381s 00:17:10.687 user 0m13.163s 00:17:10.687 sys 0m1.451s 00:17:10.687 10:40:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:10.687 ************************************ 00:17:10.687 END TEST raid5f_superblock_test 00:17:10.687 ************************************ 00:17:10.687 10:40:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.687 10:40:13 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:17:10.687 10:40:13 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:17:10.687 10:40:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:10.687 10:40:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:10.687 10:40:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:10.687 ************************************ 00:17:10.687 START TEST raid5f_rebuild_test 00:17:10.687 ************************************ 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:10.687 10:40:13 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84756 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84756 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84756 ']' 00:17:10.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:10.687 10:40:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.687 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:10.687 Zero copy mechanism will not be used. 00:17:10.687 [2024-11-20 10:40:13.956831] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:17:10.687 [2024-11-20 10:40:13.956957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84756 ] 00:17:10.687 [2024-11-20 10:40:14.133601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.946 [2024-11-20 10:40:14.247598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.204 [2024-11-20 10:40:14.441202] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:11.204 [2024-11-20 10:40:14.441260] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:11.462 10:40:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:11.462 10:40:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:17:11.462 10:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:11.462 10:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:11.462 10:40:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.462 10:40:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.462 BaseBdev1_malloc 00:17:11.462 10:40:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.462 10:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:11.462 10:40:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.462 10:40:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.462 [2024-11-20 10:40:14.835628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:17:11.462 [2024-11-20 10:40:14.835790] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.462 [2024-11-20 10:40:14.835819] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:11.462 [2024-11-20 10:40:14.835831] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.462 [2024-11-20 10:40:14.837924] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.462 [2024-11-20 10:40:14.837961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:11.462 BaseBdev1 00:17:11.462 10:40:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.462 10:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:11.462 10:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:11.462 10:40:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.462 10:40:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.462 BaseBdev2_malloc 00:17:11.462 10:40:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.462 10:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:11.462 10:40:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.462 10:40:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.462 [2024-11-20 10:40:14.891767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:11.462 [2024-11-20 10:40:14.891832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.462 [2024-11-20 10:40:14.891852] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:11.462 [2024-11-20 10:40:14.891862] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.462 [2024-11-20 10:40:14.894042] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.462 [2024-11-20 10:40:14.894081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:11.462 BaseBdev2 00:17:11.462 10:40:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.462 10:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:11.462 10:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:11.462 10:40:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.462 10:40:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.721 BaseBdev3_malloc 00:17:11.721 10:40:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.721 10:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:11.721 10:40:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.721 10:40:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.721 [2024-11-20 10:40:14.956899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:11.721 [2024-11-20 10:40:14.956961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.721 [2024-11-20 10:40:14.956983] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:11.721 [2024-11-20 10:40:14.956994] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.721 
[2024-11-20 10:40:14.959166] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.721 [2024-11-20 10:40:14.959251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:11.721 BaseBdev3 00:17:11.721 10:40:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.721 10:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:11.721 10:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:11.721 10:40:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.721 10:40:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.721 BaseBdev4_malloc 00:17:11.721 10:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.721 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:11.721 10:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.721 10:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.721 [2024-11-20 10:40:15.009585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:11.721 [2024-11-20 10:40:15.009709] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.721 [2024-11-20 10:40:15.009731] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:11.722 [2024-11-20 10:40:15.009742] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.722 [2024-11-20 10:40:15.011739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.722 [2024-11-20 10:40:15.011781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:17:11.722 BaseBdev4 00:17:11.722 10:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.722 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:11.722 10:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.722 10:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.722 spare_malloc 00:17:11.722 10:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.722 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:11.722 10:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.722 10:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.722 spare_delay 00:17:11.722 10:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.722 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:11.722 10:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.722 10:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.722 [2024-11-20 10:40:15.077053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:11.722 [2024-11-20 10:40:15.077119] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.722 [2024-11-20 10:40:15.077142] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:11.722 [2024-11-20 10:40:15.077154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.722 [2024-11-20 10:40:15.079514] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.722 [2024-11-20 10:40:15.079605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:11.722 spare 00:17:11.722 10:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.722 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:11.722 10:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.722 10:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.722 [2024-11-20 10:40:15.089087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:11.722 [2024-11-20 10:40:15.091077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:11.722 [2024-11-20 10:40:15.091151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:11.722 [2024-11-20 10:40:15.091211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:11.722 [2024-11-20 10:40:15.091311] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:11.722 [2024-11-20 10:40:15.091325] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:11.722 [2024-11-20 10:40:15.091633] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:11.722 [2024-11-20 10:40:15.100038] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:11.722 [2024-11-20 10:40:15.100059] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:11.722 [2024-11-20 10:40:15.100268] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.722 10:40:15 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.722 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:11.722 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:11.722 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:11.722 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:11.722 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:11.722 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:11.722 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.722 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.722 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.722 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.722 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.722 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.722 10:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.722 10:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.722 10:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.722 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.722 "name": "raid_bdev1", 00:17:11.722 "uuid": "06286347-fb8d-4b13-bdf9-ab51de42e808", 00:17:11.722 "strip_size_kb": 64, 00:17:11.722 "state": "online", 00:17:11.722 
"raid_level": "raid5f", 00:17:11.722 "superblock": false, 00:17:11.722 "num_base_bdevs": 4, 00:17:11.722 "num_base_bdevs_discovered": 4, 00:17:11.722 "num_base_bdevs_operational": 4, 00:17:11.722 "base_bdevs_list": [ 00:17:11.722 { 00:17:11.722 "name": "BaseBdev1", 00:17:11.722 "uuid": "0ee696f2-f718-5f50-a1fc-f223b15f97d0", 00:17:11.722 "is_configured": true, 00:17:11.722 "data_offset": 0, 00:17:11.722 "data_size": 65536 00:17:11.722 }, 00:17:11.722 { 00:17:11.722 "name": "BaseBdev2", 00:17:11.722 "uuid": "f0e0889b-11da-5fc8-8353-5629a75ec40f", 00:17:11.722 "is_configured": true, 00:17:11.722 "data_offset": 0, 00:17:11.722 "data_size": 65536 00:17:11.722 }, 00:17:11.722 { 00:17:11.722 "name": "BaseBdev3", 00:17:11.722 "uuid": "c917b057-3f7a-574b-b833-2bc30701ec37", 00:17:11.722 "is_configured": true, 00:17:11.722 "data_offset": 0, 00:17:11.722 "data_size": 65536 00:17:11.722 }, 00:17:11.722 { 00:17:11.722 "name": "BaseBdev4", 00:17:11.722 "uuid": "e47b56d9-811c-5835-8a7e-87d62b163270", 00:17:11.722 "is_configured": true, 00:17:11.722 "data_offset": 0, 00:17:11.722 "data_size": 65536 00:17:11.722 } 00:17:11.722 ] 00:17:11.722 }' 00:17:11.722 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.722 10:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.290 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:12.290 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:12.290 10:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.290 10:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.290 [2024-11-20 10:40:15.568691] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:12.290 10:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:12.290 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:17:12.290 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.290 10:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.290 10:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.290 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:12.290 10:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.290 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:12.290 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:12.290 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:12.290 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:12.290 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:12.290 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:12.290 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:12.290 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:12.290 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:12.290 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:12.290 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:12.290 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:12.290 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:17:12.290 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:12.549 [2024-11-20 10:40:15.824088] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:12.549 /dev/nbd0 00:17:12.549 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:12.549 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:12.549 10:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:12.549 10:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:12.549 10:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:12.549 10:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:12.549 10:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:12.549 10:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:12.549 10:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:12.549 10:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:12.549 10:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:12.549 1+0 records in 00:17:12.549 1+0 records out 00:17:12.549 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044165 s, 9.3 MB/s 00:17:12.549 10:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:12.549 10:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:12.549 10:40:15 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:12.549 10:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:12.549 10:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:12.549 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:12.549 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:12.549 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:12.549 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:12.549 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:12.549 10:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:17:13.121 512+0 records in 00:17:13.121 512+0 records out 00:17:13.121 100663296 bytes (101 MB, 96 MiB) copied, 0.519091 s, 194 MB/s 00:17:13.121 10:40:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:13.121 10:40:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:13.121 10:40:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:13.121 10:40:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:13.121 10:40:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:13.121 10:40:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:13.121 10:40:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:13.395 10:40:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:13.395 
[2024-11-20 10:40:16.634605] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.396 10:40:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:13.396 10:40:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:13.396 10:40:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:13.396 10:40:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:13.396 10:40:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:13.396 10:40:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:13.396 10:40:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:13.396 10:40:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:13.396 10:40:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.396 10:40:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.396 [2024-11-20 10:40:16.649656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:13.396 10:40:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.396 10:40:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:13.396 10:40:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.396 10:40:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.396 10:40:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:13.396 10:40:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:13.396 10:40:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:17:13.396 10:40:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.396 10:40:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.396 10:40:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.396 10:40:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.396 10:40:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.396 10:40:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.396 10:40:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.396 10:40:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.396 10:40:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.396 10:40:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.396 "name": "raid_bdev1", 00:17:13.396 "uuid": "06286347-fb8d-4b13-bdf9-ab51de42e808", 00:17:13.396 "strip_size_kb": 64, 00:17:13.396 "state": "online", 00:17:13.396 "raid_level": "raid5f", 00:17:13.396 "superblock": false, 00:17:13.396 "num_base_bdevs": 4, 00:17:13.396 "num_base_bdevs_discovered": 3, 00:17:13.396 "num_base_bdevs_operational": 3, 00:17:13.396 "base_bdevs_list": [ 00:17:13.396 { 00:17:13.396 "name": null, 00:17:13.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.396 "is_configured": false, 00:17:13.396 "data_offset": 0, 00:17:13.396 "data_size": 65536 00:17:13.396 }, 00:17:13.396 { 00:17:13.396 "name": "BaseBdev2", 00:17:13.396 "uuid": "f0e0889b-11da-5fc8-8353-5629a75ec40f", 00:17:13.396 "is_configured": true, 00:17:13.396 "data_offset": 0, 00:17:13.396 "data_size": 65536 00:17:13.396 }, 00:17:13.396 { 00:17:13.396 "name": "BaseBdev3", 00:17:13.396 "uuid": 
"c917b057-3f7a-574b-b833-2bc30701ec37", 00:17:13.396 "is_configured": true, 00:17:13.396 "data_offset": 0, 00:17:13.396 "data_size": 65536 00:17:13.396 }, 00:17:13.396 { 00:17:13.396 "name": "BaseBdev4", 00:17:13.396 "uuid": "e47b56d9-811c-5835-8a7e-87d62b163270", 00:17:13.396 "is_configured": true, 00:17:13.396 "data_offset": 0, 00:17:13.396 "data_size": 65536 00:17:13.396 } 00:17:13.396 ] 00:17:13.396 }' 00:17:13.396 10:40:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.396 10:40:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.670 10:40:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:13.671 10:40:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.671 10:40:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.671 [2024-11-20 10:40:17.124869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:13.671 [2024-11-20 10:40:17.142176] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:17:13.671 10:40:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.671 10:40:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:13.929 [2024-11-20 10:40:17.152700] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:14.863 10:40:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:14.863 10:40:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.863 10:40:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:14.863 10:40:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:14.863 10:40:18 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.863 10:40:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.863 10:40:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.863 10:40:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.863 10:40:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.863 10:40:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.863 10:40:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.863 "name": "raid_bdev1", 00:17:14.863 "uuid": "06286347-fb8d-4b13-bdf9-ab51de42e808", 00:17:14.863 "strip_size_kb": 64, 00:17:14.863 "state": "online", 00:17:14.863 "raid_level": "raid5f", 00:17:14.863 "superblock": false, 00:17:14.863 "num_base_bdevs": 4, 00:17:14.863 "num_base_bdevs_discovered": 4, 00:17:14.863 "num_base_bdevs_operational": 4, 00:17:14.863 "process": { 00:17:14.863 "type": "rebuild", 00:17:14.863 "target": "spare", 00:17:14.863 "progress": { 00:17:14.863 "blocks": 19200, 00:17:14.863 "percent": 9 00:17:14.863 } 00:17:14.863 }, 00:17:14.863 "base_bdevs_list": [ 00:17:14.863 { 00:17:14.863 "name": "spare", 00:17:14.863 "uuid": "7bd0979a-4117-5141-8587-f9831bbbe1d2", 00:17:14.863 "is_configured": true, 00:17:14.863 "data_offset": 0, 00:17:14.863 "data_size": 65536 00:17:14.863 }, 00:17:14.863 { 00:17:14.863 "name": "BaseBdev2", 00:17:14.864 "uuid": "f0e0889b-11da-5fc8-8353-5629a75ec40f", 00:17:14.864 "is_configured": true, 00:17:14.864 "data_offset": 0, 00:17:14.864 "data_size": 65536 00:17:14.864 }, 00:17:14.864 { 00:17:14.864 "name": "BaseBdev3", 00:17:14.864 "uuid": "c917b057-3f7a-574b-b833-2bc30701ec37", 00:17:14.864 "is_configured": true, 00:17:14.864 "data_offset": 0, 00:17:14.864 "data_size": 65536 00:17:14.864 }, 
00:17:14.864 { 00:17:14.864 "name": "BaseBdev4", 00:17:14.864 "uuid": "e47b56d9-811c-5835-8a7e-87d62b163270", 00:17:14.864 "is_configured": true, 00:17:14.864 "data_offset": 0, 00:17:14.864 "data_size": 65536 00:17:14.864 } 00:17:14.864 ] 00:17:14.864 }' 00:17:14.864 10:40:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.864 10:40:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:14.864 10:40:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.864 10:40:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:14.864 10:40:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:14.864 10:40:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.864 10:40:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.864 [2024-11-20 10:40:18.284007] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:15.123 [2024-11-20 10:40:18.360013] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:15.123 [2024-11-20 10:40:18.360100] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.123 [2024-11-20 10:40:18.360120] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:15.123 [2024-11-20 10:40:18.360131] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:15.123 10:40:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.123 10:40:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:15.123 10:40:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:17:15.123 10:40:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.123 10:40:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:15.123 10:40:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:15.123 10:40:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:15.123 10:40:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.123 10:40:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.123 10:40:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.123 10:40:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.123 10:40:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.123 10:40:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.123 10:40:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.123 10:40:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.123 10:40:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.123 10:40:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.123 "name": "raid_bdev1", 00:17:15.123 "uuid": "06286347-fb8d-4b13-bdf9-ab51de42e808", 00:17:15.123 "strip_size_kb": 64, 00:17:15.123 "state": "online", 00:17:15.123 "raid_level": "raid5f", 00:17:15.123 "superblock": false, 00:17:15.123 "num_base_bdevs": 4, 00:17:15.123 "num_base_bdevs_discovered": 3, 00:17:15.123 "num_base_bdevs_operational": 3, 00:17:15.123 "base_bdevs_list": [ 00:17:15.123 { 00:17:15.123 "name": null, 00:17:15.123 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:15.123 "is_configured": false, 00:17:15.123 "data_offset": 0, 00:17:15.123 "data_size": 65536 00:17:15.123 }, 00:17:15.123 { 00:17:15.123 "name": "BaseBdev2", 00:17:15.123 "uuid": "f0e0889b-11da-5fc8-8353-5629a75ec40f", 00:17:15.123 "is_configured": true, 00:17:15.123 "data_offset": 0, 00:17:15.123 "data_size": 65536 00:17:15.123 }, 00:17:15.123 { 00:17:15.123 "name": "BaseBdev3", 00:17:15.123 "uuid": "c917b057-3f7a-574b-b833-2bc30701ec37", 00:17:15.123 "is_configured": true, 00:17:15.123 "data_offset": 0, 00:17:15.123 "data_size": 65536 00:17:15.123 }, 00:17:15.123 { 00:17:15.123 "name": "BaseBdev4", 00:17:15.123 "uuid": "e47b56d9-811c-5835-8a7e-87d62b163270", 00:17:15.123 "is_configured": true, 00:17:15.123 "data_offset": 0, 00:17:15.123 "data_size": 65536 00:17:15.123 } 00:17:15.123 ] 00:17:15.123 }' 00:17:15.123 10:40:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.123 10:40:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.382 10:40:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:15.382 10:40:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.382 10:40:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:15.382 10:40:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:15.383 10:40:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.383 10:40:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.383 10:40:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.383 10:40:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.383 10:40:18 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.383 10:40:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.383 10:40:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.383 "name": "raid_bdev1", 00:17:15.383 "uuid": "06286347-fb8d-4b13-bdf9-ab51de42e808", 00:17:15.383 "strip_size_kb": 64, 00:17:15.383 "state": "online", 00:17:15.383 "raid_level": "raid5f", 00:17:15.383 "superblock": false, 00:17:15.383 "num_base_bdevs": 4, 00:17:15.383 "num_base_bdevs_discovered": 3, 00:17:15.383 "num_base_bdevs_operational": 3, 00:17:15.383 "base_bdevs_list": [ 00:17:15.383 { 00:17:15.383 "name": null, 00:17:15.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.383 "is_configured": false, 00:17:15.383 "data_offset": 0, 00:17:15.383 "data_size": 65536 00:17:15.383 }, 00:17:15.383 { 00:17:15.383 "name": "BaseBdev2", 00:17:15.383 "uuid": "f0e0889b-11da-5fc8-8353-5629a75ec40f", 00:17:15.383 "is_configured": true, 00:17:15.383 "data_offset": 0, 00:17:15.383 "data_size": 65536 00:17:15.383 }, 00:17:15.383 { 00:17:15.383 "name": "BaseBdev3", 00:17:15.383 "uuid": "c917b057-3f7a-574b-b833-2bc30701ec37", 00:17:15.383 "is_configured": true, 00:17:15.383 "data_offset": 0, 00:17:15.383 "data_size": 65536 00:17:15.383 }, 00:17:15.383 { 00:17:15.383 "name": "BaseBdev4", 00:17:15.383 "uuid": "e47b56d9-811c-5835-8a7e-87d62b163270", 00:17:15.383 "is_configured": true, 00:17:15.383 "data_offset": 0, 00:17:15.383 "data_size": 65536 00:17:15.383 } 00:17:15.383 ] 00:17:15.383 }' 00:17:15.383 10:40:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.642 10:40:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:15.642 10:40:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.642 10:40:18 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:15.642 10:40:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:15.642 10:40:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.642 10:40:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.642 [2024-11-20 10:40:18.957314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:15.642 [2024-11-20 10:40:18.972063] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:17:15.642 10:40:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.642 10:40:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:15.642 [2024-11-20 10:40:18.980454] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:16.580 10:40:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:16.580 10:40:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.580 10:40:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:16.580 10:40:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:16.580 10:40:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.580 10:40:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.580 10:40:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.580 10:40:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.580 10:40:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.580 10:40:20 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.580 10:40:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.580 "name": "raid_bdev1", 00:17:16.580 "uuid": "06286347-fb8d-4b13-bdf9-ab51de42e808", 00:17:16.580 "strip_size_kb": 64, 00:17:16.580 "state": "online", 00:17:16.580 "raid_level": "raid5f", 00:17:16.580 "superblock": false, 00:17:16.580 "num_base_bdevs": 4, 00:17:16.580 "num_base_bdevs_discovered": 4, 00:17:16.580 "num_base_bdevs_operational": 4, 00:17:16.580 "process": { 00:17:16.580 "type": "rebuild", 00:17:16.580 "target": "spare", 00:17:16.580 "progress": { 00:17:16.580 "blocks": 19200, 00:17:16.580 "percent": 9 00:17:16.580 } 00:17:16.580 }, 00:17:16.580 "base_bdevs_list": [ 00:17:16.580 { 00:17:16.580 "name": "spare", 00:17:16.580 "uuid": "7bd0979a-4117-5141-8587-f9831bbbe1d2", 00:17:16.580 "is_configured": true, 00:17:16.580 "data_offset": 0, 00:17:16.580 "data_size": 65536 00:17:16.580 }, 00:17:16.580 { 00:17:16.580 "name": "BaseBdev2", 00:17:16.580 "uuid": "f0e0889b-11da-5fc8-8353-5629a75ec40f", 00:17:16.580 "is_configured": true, 00:17:16.580 "data_offset": 0, 00:17:16.580 "data_size": 65536 00:17:16.580 }, 00:17:16.580 { 00:17:16.580 "name": "BaseBdev3", 00:17:16.580 "uuid": "c917b057-3f7a-574b-b833-2bc30701ec37", 00:17:16.580 "is_configured": true, 00:17:16.580 "data_offset": 0, 00:17:16.580 "data_size": 65536 00:17:16.580 }, 00:17:16.580 { 00:17:16.580 "name": "BaseBdev4", 00:17:16.580 "uuid": "e47b56d9-811c-5835-8a7e-87d62b163270", 00:17:16.580 "is_configured": true, 00:17:16.580 "data_offset": 0, 00:17:16.580 "data_size": 65536 00:17:16.580 } 00:17:16.580 ] 00:17:16.580 }' 00:17:16.580 10:40:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.840 10:40:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:16.840 10:40:20 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.840 10:40:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:16.840 10:40:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:16.840 10:40:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:16.840 10:40:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:16.840 10:40:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=625 00:17:16.840 10:40:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:16.840 10:40:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:16.840 10:40:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.840 10:40:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:16.840 10:40:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:16.840 10:40:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.840 10:40:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.840 10:40:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.840 10:40:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.840 10:40:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.840 10:40:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.840 10:40:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.840 "name": "raid_bdev1", 00:17:16.840 "uuid": "06286347-fb8d-4b13-bdf9-ab51de42e808", 
00:17:16.840 "strip_size_kb": 64, 00:17:16.840 "state": "online", 00:17:16.840 "raid_level": "raid5f", 00:17:16.840 "superblock": false, 00:17:16.840 "num_base_bdevs": 4, 00:17:16.840 "num_base_bdevs_discovered": 4, 00:17:16.840 "num_base_bdevs_operational": 4, 00:17:16.840 "process": { 00:17:16.840 "type": "rebuild", 00:17:16.840 "target": "spare", 00:17:16.840 "progress": { 00:17:16.840 "blocks": 21120, 00:17:16.840 "percent": 10 00:17:16.840 } 00:17:16.840 }, 00:17:16.840 "base_bdevs_list": [ 00:17:16.840 { 00:17:16.840 "name": "spare", 00:17:16.840 "uuid": "7bd0979a-4117-5141-8587-f9831bbbe1d2", 00:17:16.840 "is_configured": true, 00:17:16.840 "data_offset": 0, 00:17:16.840 "data_size": 65536 00:17:16.840 }, 00:17:16.840 { 00:17:16.840 "name": "BaseBdev2", 00:17:16.840 "uuid": "f0e0889b-11da-5fc8-8353-5629a75ec40f", 00:17:16.840 "is_configured": true, 00:17:16.840 "data_offset": 0, 00:17:16.840 "data_size": 65536 00:17:16.840 }, 00:17:16.840 { 00:17:16.840 "name": "BaseBdev3", 00:17:16.840 "uuid": "c917b057-3f7a-574b-b833-2bc30701ec37", 00:17:16.840 "is_configured": true, 00:17:16.840 "data_offset": 0, 00:17:16.840 "data_size": 65536 00:17:16.840 }, 00:17:16.840 { 00:17:16.840 "name": "BaseBdev4", 00:17:16.840 "uuid": "e47b56d9-811c-5835-8a7e-87d62b163270", 00:17:16.840 "is_configured": true, 00:17:16.840 "data_offset": 0, 00:17:16.840 "data_size": 65536 00:17:16.840 } 00:17:16.840 ] 00:17:16.840 }' 00:17:16.840 10:40:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.840 10:40:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:16.840 10:40:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.840 10:40:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:16.840 10:40:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:18.220 10:40:21 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:18.220 10:40:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:18.220 10:40:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.220 10:40:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:18.220 10:40:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:18.220 10:40:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.220 10:40:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.220 10:40:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.220 10:40:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.220 10:40:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.220 10:40:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.220 10:40:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.220 "name": "raid_bdev1", 00:17:18.220 "uuid": "06286347-fb8d-4b13-bdf9-ab51de42e808", 00:17:18.220 "strip_size_kb": 64, 00:17:18.220 "state": "online", 00:17:18.220 "raid_level": "raid5f", 00:17:18.220 "superblock": false, 00:17:18.220 "num_base_bdevs": 4, 00:17:18.220 "num_base_bdevs_discovered": 4, 00:17:18.220 "num_base_bdevs_operational": 4, 00:17:18.220 "process": { 00:17:18.220 "type": "rebuild", 00:17:18.220 "target": "spare", 00:17:18.220 "progress": { 00:17:18.220 "blocks": 44160, 00:17:18.220 "percent": 22 00:17:18.220 } 00:17:18.220 }, 00:17:18.220 "base_bdevs_list": [ 00:17:18.220 { 00:17:18.220 "name": "spare", 00:17:18.220 "uuid": "7bd0979a-4117-5141-8587-f9831bbbe1d2", 
00:17:18.220 "is_configured": true, 00:17:18.220 "data_offset": 0, 00:17:18.220 "data_size": 65536 00:17:18.220 }, 00:17:18.220 { 00:17:18.220 "name": "BaseBdev2", 00:17:18.220 "uuid": "f0e0889b-11da-5fc8-8353-5629a75ec40f", 00:17:18.220 "is_configured": true, 00:17:18.220 "data_offset": 0, 00:17:18.220 "data_size": 65536 00:17:18.220 }, 00:17:18.220 { 00:17:18.220 "name": "BaseBdev3", 00:17:18.220 "uuid": "c917b057-3f7a-574b-b833-2bc30701ec37", 00:17:18.220 "is_configured": true, 00:17:18.220 "data_offset": 0, 00:17:18.220 "data_size": 65536 00:17:18.220 }, 00:17:18.220 { 00:17:18.220 "name": "BaseBdev4", 00:17:18.220 "uuid": "e47b56d9-811c-5835-8a7e-87d62b163270", 00:17:18.220 "is_configured": true, 00:17:18.220 "data_offset": 0, 00:17:18.220 "data_size": 65536 00:17:18.220 } 00:17:18.220 ] 00:17:18.220 }' 00:17:18.220 10:40:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.220 10:40:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:18.220 10:40:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.220 10:40:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:18.220 10:40:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:19.155 10:40:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:19.155 10:40:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:19.155 10:40:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.155 10:40:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:19.155 10:40:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:19.155 10:40:22 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.155 10:40:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.155 10:40:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.155 10:40:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.155 10:40:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.155 10:40:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.155 10:40:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.155 "name": "raid_bdev1", 00:17:19.155 "uuid": "06286347-fb8d-4b13-bdf9-ab51de42e808", 00:17:19.155 "strip_size_kb": 64, 00:17:19.155 "state": "online", 00:17:19.155 "raid_level": "raid5f", 00:17:19.155 "superblock": false, 00:17:19.155 "num_base_bdevs": 4, 00:17:19.155 "num_base_bdevs_discovered": 4, 00:17:19.155 "num_base_bdevs_operational": 4, 00:17:19.155 "process": { 00:17:19.155 "type": "rebuild", 00:17:19.155 "target": "spare", 00:17:19.156 "progress": { 00:17:19.156 "blocks": 65280, 00:17:19.156 "percent": 33 00:17:19.156 } 00:17:19.156 }, 00:17:19.156 "base_bdevs_list": [ 00:17:19.156 { 00:17:19.156 "name": "spare", 00:17:19.156 "uuid": "7bd0979a-4117-5141-8587-f9831bbbe1d2", 00:17:19.156 "is_configured": true, 00:17:19.156 "data_offset": 0, 00:17:19.156 "data_size": 65536 00:17:19.156 }, 00:17:19.156 { 00:17:19.156 "name": "BaseBdev2", 00:17:19.156 "uuid": "f0e0889b-11da-5fc8-8353-5629a75ec40f", 00:17:19.156 "is_configured": true, 00:17:19.156 "data_offset": 0, 00:17:19.156 "data_size": 65536 00:17:19.156 }, 00:17:19.156 { 00:17:19.156 "name": "BaseBdev3", 00:17:19.156 "uuid": "c917b057-3f7a-574b-b833-2bc30701ec37", 00:17:19.156 "is_configured": true, 00:17:19.156 "data_offset": 0, 00:17:19.156 "data_size": 65536 00:17:19.156 }, 00:17:19.156 { 00:17:19.156 "name": 
"BaseBdev4", 00:17:19.156 "uuid": "e47b56d9-811c-5835-8a7e-87d62b163270", 00:17:19.156 "is_configured": true, 00:17:19.156 "data_offset": 0, 00:17:19.156 "data_size": 65536 00:17:19.156 } 00:17:19.156 ] 00:17:19.156 }' 00:17:19.156 10:40:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.156 10:40:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:19.156 10:40:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.156 10:40:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:19.156 10:40:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:20.092 10:40:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:20.092 10:40:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:20.092 10:40:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.092 10:40:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:20.092 10:40:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:20.092 10:40:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.351 10:40:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.351 10:40:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.351 10:40:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.351 10:40:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.351 10:40:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.351 10:40:23 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.351 "name": "raid_bdev1", 00:17:20.351 "uuid": "06286347-fb8d-4b13-bdf9-ab51de42e808", 00:17:20.351 "strip_size_kb": 64, 00:17:20.351 "state": "online", 00:17:20.351 "raid_level": "raid5f", 00:17:20.351 "superblock": false, 00:17:20.351 "num_base_bdevs": 4, 00:17:20.351 "num_base_bdevs_discovered": 4, 00:17:20.351 "num_base_bdevs_operational": 4, 00:17:20.351 "process": { 00:17:20.351 "type": "rebuild", 00:17:20.351 "target": "spare", 00:17:20.351 "progress": { 00:17:20.351 "blocks": 86400, 00:17:20.351 "percent": 43 00:17:20.351 } 00:17:20.351 }, 00:17:20.351 "base_bdevs_list": [ 00:17:20.351 { 00:17:20.351 "name": "spare", 00:17:20.351 "uuid": "7bd0979a-4117-5141-8587-f9831bbbe1d2", 00:17:20.351 "is_configured": true, 00:17:20.351 "data_offset": 0, 00:17:20.351 "data_size": 65536 00:17:20.351 }, 00:17:20.351 { 00:17:20.351 "name": "BaseBdev2", 00:17:20.351 "uuid": "f0e0889b-11da-5fc8-8353-5629a75ec40f", 00:17:20.351 "is_configured": true, 00:17:20.351 "data_offset": 0, 00:17:20.351 "data_size": 65536 00:17:20.351 }, 00:17:20.351 { 00:17:20.351 "name": "BaseBdev3", 00:17:20.351 "uuid": "c917b057-3f7a-574b-b833-2bc30701ec37", 00:17:20.351 "is_configured": true, 00:17:20.351 "data_offset": 0, 00:17:20.351 "data_size": 65536 00:17:20.351 }, 00:17:20.351 { 00:17:20.351 "name": "BaseBdev4", 00:17:20.351 "uuid": "e47b56d9-811c-5835-8a7e-87d62b163270", 00:17:20.351 "is_configured": true, 00:17:20.351 "data_offset": 0, 00:17:20.351 "data_size": 65536 00:17:20.351 } 00:17:20.351 ] 00:17:20.351 }' 00:17:20.351 10:40:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.351 10:40:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:20.351 10:40:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.351 10:40:23 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:20.352 10:40:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:21.288 10:40:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:21.288 10:40:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:21.288 10:40:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.288 10:40:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:21.288 10:40:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:21.288 10:40:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.288 10:40:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.288 10:40:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.288 10:40:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.288 10:40:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.288 10:40:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.288 10:40:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.288 "name": "raid_bdev1", 00:17:21.288 "uuid": "06286347-fb8d-4b13-bdf9-ab51de42e808", 00:17:21.288 "strip_size_kb": 64, 00:17:21.288 "state": "online", 00:17:21.288 "raid_level": "raid5f", 00:17:21.288 "superblock": false, 00:17:21.288 "num_base_bdevs": 4, 00:17:21.288 "num_base_bdevs_discovered": 4, 00:17:21.288 "num_base_bdevs_operational": 4, 00:17:21.288 "process": { 00:17:21.288 "type": "rebuild", 00:17:21.288 "target": "spare", 00:17:21.288 "progress": { 00:17:21.288 "blocks": 107520, 00:17:21.288 "percent": 54 00:17:21.288 } 
00:17:21.288 }, 00:17:21.288 "base_bdevs_list": [ 00:17:21.288 { 00:17:21.288 "name": "spare", 00:17:21.288 "uuid": "7bd0979a-4117-5141-8587-f9831bbbe1d2", 00:17:21.288 "is_configured": true, 00:17:21.288 "data_offset": 0, 00:17:21.288 "data_size": 65536 00:17:21.288 }, 00:17:21.288 { 00:17:21.288 "name": "BaseBdev2", 00:17:21.288 "uuid": "f0e0889b-11da-5fc8-8353-5629a75ec40f", 00:17:21.288 "is_configured": true, 00:17:21.288 "data_offset": 0, 00:17:21.288 "data_size": 65536 00:17:21.288 }, 00:17:21.288 { 00:17:21.288 "name": "BaseBdev3", 00:17:21.288 "uuid": "c917b057-3f7a-574b-b833-2bc30701ec37", 00:17:21.288 "is_configured": true, 00:17:21.288 "data_offset": 0, 00:17:21.288 "data_size": 65536 00:17:21.288 }, 00:17:21.288 { 00:17:21.288 "name": "BaseBdev4", 00:17:21.288 "uuid": "e47b56d9-811c-5835-8a7e-87d62b163270", 00:17:21.288 "is_configured": true, 00:17:21.288 "data_offset": 0, 00:17:21.288 "data_size": 65536 00:17:21.288 } 00:17:21.288 ] 00:17:21.288 }' 00:17:21.288 10:40:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.547 10:40:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:21.547 10:40:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.547 10:40:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:21.547 10:40:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:22.503 10:40:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:22.503 10:40:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:22.503 10:40:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.503 10:40:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:22.503 
10:40:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:22.503 10:40:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.503 10:40:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.503 10:40:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.503 10:40:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.503 10:40:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.503 10:40:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.503 10:40:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.503 "name": "raid_bdev1", 00:17:22.503 "uuid": "06286347-fb8d-4b13-bdf9-ab51de42e808", 00:17:22.503 "strip_size_kb": 64, 00:17:22.503 "state": "online", 00:17:22.503 "raid_level": "raid5f", 00:17:22.503 "superblock": false, 00:17:22.503 "num_base_bdevs": 4, 00:17:22.503 "num_base_bdevs_discovered": 4, 00:17:22.503 "num_base_bdevs_operational": 4, 00:17:22.503 "process": { 00:17:22.503 "type": "rebuild", 00:17:22.503 "target": "spare", 00:17:22.503 "progress": { 00:17:22.503 "blocks": 130560, 00:17:22.503 "percent": 66 00:17:22.503 } 00:17:22.503 }, 00:17:22.503 "base_bdevs_list": [ 00:17:22.503 { 00:17:22.503 "name": "spare", 00:17:22.503 "uuid": "7bd0979a-4117-5141-8587-f9831bbbe1d2", 00:17:22.503 "is_configured": true, 00:17:22.503 "data_offset": 0, 00:17:22.503 "data_size": 65536 00:17:22.504 }, 00:17:22.504 { 00:17:22.504 "name": "BaseBdev2", 00:17:22.504 "uuid": "f0e0889b-11da-5fc8-8353-5629a75ec40f", 00:17:22.504 "is_configured": true, 00:17:22.504 "data_offset": 0, 00:17:22.504 "data_size": 65536 00:17:22.504 }, 00:17:22.504 { 00:17:22.504 "name": "BaseBdev3", 00:17:22.504 "uuid": "c917b057-3f7a-574b-b833-2bc30701ec37", 
00:17:22.504 "is_configured": true, 00:17:22.504 "data_offset": 0, 00:17:22.504 "data_size": 65536 00:17:22.504 }, 00:17:22.504 { 00:17:22.504 "name": "BaseBdev4", 00:17:22.504 "uuid": "e47b56d9-811c-5835-8a7e-87d62b163270", 00:17:22.504 "is_configured": true, 00:17:22.504 "data_offset": 0, 00:17:22.504 "data_size": 65536 00:17:22.504 } 00:17:22.504 ] 00:17:22.504 }' 00:17:22.504 10:40:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.504 10:40:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:22.504 10:40:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.762 10:40:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:22.762 10:40:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:23.698 10:40:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:23.698 10:40:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:23.698 10:40:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:23.698 10:40:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:23.698 10:40:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:23.698 10:40:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:23.698 10:40:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.698 10:40:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.698 10:40:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.698 10:40:27 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:23.698 10:40:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.698 10:40:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:23.698 "name": "raid_bdev1", 00:17:23.698 "uuid": "06286347-fb8d-4b13-bdf9-ab51de42e808", 00:17:23.698 "strip_size_kb": 64, 00:17:23.698 "state": "online", 00:17:23.698 "raid_level": "raid5f", 00:17:23.698 "superblock": false, 00:17:23.698 "num_base_bdevs": 4, 00:17:23.698 "num_base_bdevs_discovered": 4, 00:17:23.698 "num_base_bdevs_operational": 4, 00:17:23.698 "process": { 00:17:23.698 "type": "rebuild", 00:17:23.698 "target": "spare", 00:17:23.698 "progress": { 00:17:23.698 "blocks": 151680, 00:17:23.698 "percent": 77 00:17:23.698 } 00:17:23.698 }, 00:17:23.698 "base_bdevs_list": [ 00:17:23.698 { 00:17:23.698 "name": "spare", 00:17:23.698 "uuid": "7bd0979a-4117-5141-8587-f9831bbbe1d2", 00:17:23.698 "is_configured": true, 00:17:23.698 "data_offset": 0, 00:17:23.698 "data_size": 65536 00:17:23.698 }, 00:17:23.698 { 00:17:23.698 "name": "BaseBdev2", 00:17:23.698 "uuid": "f0e0889b-11da-5fc8-8353-5629a75ec40f", 00:17:23.698 "is_configured": true, 00:17:23.698 "data_offset": 0, 00:17:23.698 "data_size": 65536 00:17:23.698 }, 00:17:23.698 { 00:17:23.698 "name": "BaseBdev3", 00:17:23.698 "uuid": "c917b057-3f7a-574b-b833-2bc30701ec37", 00:17:23.698 "is_configured": true, 00:17:23.698 "data_offset": 0, 00:17:23.698 "data_size": 65536 00:17:23.698 }, 00:17:23.698 { 00:17:23.698 "name": "BaseBdev4", 00:17:23.698 "uuid": "e47b56d9-811c-5835-8a7e-87d62b163270", 00:17:23.698 "is_configured": true, 00:17:23.698 "data_offset": 0, 00:17:23.698 "data_size": 65536 00:17:23.698 } 00:17:23.698 ] 00:17:23.698 }' 00:17:23.698 10:40:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:23.698 10:40:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:17:23.699 10:40:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:23.699 10:40:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:23.699 10:40:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:25.076 10:40:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:25.076 10:40:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:25.076 10:40:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:25.076 10:40:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:25.076 10:40:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:25.076 10:40:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.076 10:40:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.076 10:40:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.076 10:40:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.076 10:40:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.076 10:40:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.076 10:40:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:25.076 "name": "raid_bdev1", 00:17:25.076 "uuid": "06286347-fb8d-4b13-bdf9-ab51de42e808", 00:17:25.076 "strip_size_kb": 64, 00:17:25.076 "state": "online", 00:17:25.076 "raid_level": "raid5f", 00:17:25.076 "superblock": false, 00:17:25.076 "num_base_bdevs": 4, 00:17:25.076 "num_base_bdevs_discovered": 4, 00:17:25.076 "num_base_bdevs_operational": 4, 00:17:25.076 
"process": { 00:17:25.076 "type": "rebuild", 00:17:25.076 "target": "spare", 00:17:25.076 "progress": { 00:17:25.077 "blocks": 174720, 00:17:25.077 "percent": 88 00:17:25.077 } 00:17:25.077 }, 00:17:25.077 "base_bdevs_list": [ 00:17:25.077 { 00:17:25.077 "name": "spare", 00:17:25.077 "uuid": "7bd0979a-4117-5141-8587-f9831bbbe1d2", 00:17:25.077 "is_configured": true, 00:17:25.077 "data_offset": 0, 00:17:25.077 "data_size": 65536 00:17:25.077 }, 00:17:25.077 { 00:17:25.077 "name": "BaseBdev2", 00:17:25.077 "uuid": "f0e0889b-11da-5fc8-8353-5629a75ec40f", 00:17:25.077 "is_configured": true, 00:17:25.077 "data_offset": 0, 00:17:25.077 "data_size": 65536 00:17:25.077 }, 00:17:25.077 { 00:17:25.077 "name": "BaseBdev3", 00:17:25.077 "uuid": "c917b057-3f7a-574b-b833-2bc30701ec37", 00:17:25.077 "is_configured": true, 00:17:25.077 "data_offset": 0, 00:17:25.077 "data_size": 65536 00:17:25.077 }, 00:17:25.077 { 00:17:25.077 "name": "BaseBdev4", 00:17:25.077 "uuid": "e47b56d9-811c-5835-8a7e-87d62b163270", 00:17:25.077 "is_configured": true, 00:17:25.077 "data_offset": 0, 00:17:25.077 "data_size": 65536 00:17:25.077 } 00:17:25.077 ] 00:17:25.077 }' 00:17:25.077 10:40:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:25.077 10:40:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:25.077 10:40:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:25.077 10:40:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:25.077 10:40:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:26.012 10:40:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:26.012 10:40:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:26.012 10:40:29 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:26.012 10:40:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:26.012 10:40:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:26.012 10:40:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:26.012 10:40:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.012 10:40:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.012 10:40:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.012 10:40:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.012 10:40:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.012 [2024-11-20 10:40:29.325077] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:26.012 [2024-11-20 10:40:29.325157] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:26.012 [2024-11-20 10:40:29.325195] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:26.012 10:40:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.012 "name": "raid_bdev1", 00:17:26.012 "uuid": "06286347-fb8d-4b13-bdf9-ab51de42e808", 00:17:26.012 "strip_size_kb": 64, 00:17:26.012 "state": "online", 00:17:26.012 "raid_level": "raid5f", 00:17:26.012 "superblock": false, 00:17:26.012 "num_base_bdevs": 4, 00:17:26.012 "num_base_bdevs_discovered": 4, 00:17:26.012 "num_base_bdevs_operational": 4, 00:17:26.012 "process": { 00:17:26.012 "type": "rebuild", 00:17:26.012 "target": "spare", 00:17:26.012 "progress": { 00:17:26.012 "blocks": 195840, 00:17:26.012 "percent": 99 00:17:26.012 } 00:17:26.012 }, 00:17:26.012 "base_bdevs_list": [ 
00:17:26.012 { 00:17:26.012 "name": "spare", 00:17:26.012 "uuid": "7bd0979a-4117-5141-8587-f9831bbbe1d2", 00:17:26.012 "is_configured": true, 00:17:26.012 "data_offset": 0, 00:17:26.012 "data_size": 65536 00:17:26.012 }, 00:17:26.012 { 00:17:26.012 "name": "BaseBdev2", 00:17:26.012 "uuid": "f0e0889b-11da-5fc8-8353-5629a75ec40f", 00:17:26.012 "is_configured": true, 00:17:26.012 "data_offset": 0, 00:17:26.012 "data_size": 65536 00:17:26.012 }, 00:17:26.012 { 00:17:26.012 "name": "BaseBdev3", 00:17:26.012 "uuid": "c917b057-3f7a-574b-b833-2bc30701ec37", 00:17:26.012 "is_configured": true, 00:17:26.012 "data_offset": 0, 00:17:26.012 "data_size": 65536 00:17:26.012 }, 00:17:26.012 { 00:17:26.012 "name": "BaseBdev4", 00:17:26.012 "uuid": "e47b56d9-811c-5835-8a7e-87d62b163270", 00:17:26.012 "is_configured": true, 00:17:26.012 "data_offset": 0, 00:17:26.012 "data_size": 65536 00:17:26.012 } 00:17:26.012 ] 00:17:26.012 }' 00:17:26.012 10:40:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.012 10:40:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:26.012 10:40:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.012 10:40:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:26.012 10:40:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:27.397 10:40:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:27.397 10:40:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:27.397 10:40:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.397 10:40:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:27.397 10:40:30 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:17:27.397 10:40:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.397 10:40:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.397 10:40:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.397 10:40:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.397 10:40:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.397 10:40:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.397 10:40:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.397 "name": "raid_bdev1", 00:17:27.397 "uuid": "06286347-fb8d-4b13-bdf9-ab51de42e808", 00:17:27.397 "strip_size_kb": 64, 00:17:27.397 "state": "online", 00:17:27.397 "raid_level": "raid5f", 00:17:27.397 "superblock": false, 00:17:27.397 "num_base_bdevs": 4, 00:17:27.397 "num_base_bdevs_discovered": 4, 00:17:27.397 "num_base_bdevs_operational": 4, 00:17:27.397 "base_bdevs_list": [ 00:17:27.397 { 00:17:27.397 "name": "spare", 00:17:27.397 "uuid": "7bd0979a-4117-5141-8587-f9831bbbe1d2", 00:17:27.397 "is_configured": true, 00:17:27.397 "data_offset": 0, 00:17:27.397 "data_size": 65536 00:17:27.397 }, 00:17:27.397 { 00:17:27.397 "name": "BaseBdev2", 00:17:27.397 "uuid": "f0e0889b-11da-5fc8-8353-5629a75ec40f", 00:17:27.397 "is_configured": true, 00:17:27.397 "data_offset": 0, 00:17:27.397 "data_size": 65536 00:17:27.397 }, 00:17:27.397 { 00:17:27.397 "name": "BaseBdev3", 00:17:27.397 "uuid": "c917b057-3f7a-574b-b833-2bc30701ec37", 00:17:27.397 "is_configured": true, 00:17:27.397 "data_offset": 0, 00:17:27.397 "data_size": 65536 00:17:27.397 }, 00:17:27.397 { 00:17:27.397 "name": "BaseBdev4", 00:17:27.397 "uuid": "e47b56d9-811c-5835-8a7e-87d62b163270", 00:17:27.397 "is_configured": 
true, 00:17:27.397 "data_offset": 0, 00:17:27.397 "data_size": 65536 00:17:27.397 } 00:17:27.397 ] 00:17:27.397 }' 00:17:27.397 10:40:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.397 10:40:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:27.397 10:40:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.398 10:40:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:27.398 10:40:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:27.398 10:40:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:27.398 10:40:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.398 10:40:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:27.398 10:40:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:27.398 10:40:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.398 10:40:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.398 10:40:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.398 10:40:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.398 10:40:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.398 10:40:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.398 10:40:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.398 "name": "raid_bdev1", 00:17:27.398 "uuid": "06286347-fb8d-4b13-bdf9-ab51de42e808", 00:17:27.398 "strip_size_kb": 64, 00:17:27.398 "state": 
"online", 00:17:27.398 "raid_level": "raid5f", 00:17:27.398 "superblock": false, 00:17:27.398 "num_base_bdevs": 4, 00:17:27.398 "num_base_bdevs_discovered": 4, 00:17:27.398 "num_base_bdevs_operational": 4, 00:17:27.398 "base_bdevs_list": [ 00:17:27.398 { 00:17:27.398 "name": "spare", 00:17:27.398 "uuid": "7bd0979a-4117-5141-8587-f9831bbbe1d2", 00:17:27.398 "is_configured": true, 00:17:27.398 "data_offset": 0, 00:17:27.398 "data_size": 65536 00:17:27.398 }, 00:17:27.398 { 00:17:27.398 "name": "BaseBdev2", 00:17:27.398 "uuid": "f0e0889b-11da-5fc8-8353-5629a75ec40f", 00:17:27.398 "is_configured": true, 00:17:27.398 "data_offset": 0, 00:17:27.398 "data_size": 65536 00:17:27.398 }, 00:17:27.398 { 00:17:27.398 "name": "BaseBdev3", 00:17:27.398 "uuid": "c917b057-3f7a-574b-b833-2bc30701ec37", 00:17:27.398 "is_configured": true, 00:17:27.398 "data_offset": 0, 00:17:27.398 "data_size": 65536 00:17:27.398 }, 00:17:27.398 { 00:17:27.398 "name": "BaseBdev4", 00:17:27.398 "uuid": "e47b56d9-811c-5835-8a7e-87d62b163270", 00:17:27.398 "is_configured": true, 00:17:27.398 "data_offset": 0, 00:17:27.398 "data_size": 65536 00:17:27.398 } 00:17:27.398 ] 00:17:27.398 }' 00:17:27.398 10:40:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.398 10:40:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:27.398 10:40:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.398 10:40:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:27.398 10:40:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:27.398 10:40:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:27.398 10:40:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:27.398 10:40:30 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:27.398 10:40:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:27.398 10:40:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:27.398 10:40:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.398 10:40:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.398 10:40:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.398 10:40:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.398 10:40:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.398 10:40:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.398 10:40:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.398 10:40:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.398 10:40:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.398 10:40:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.398 "name": "raid_bdev1", 00:17:27.398 "uuid": "06286347-fb8d-4b13-bdf9-ab51de42e808", 00:17:27.398 "strip_size_kb": 64, 00:17:27.398 "state": "online", 00:17:27.398 "raid_level": "raid5f", 00:17:27.398 "superblock": false, 00:17:27.398 "num_base_bdevs": 4, 00:17:27.398 "num_base_bdevs_discovered": 4, 00:17:27.398 "num_base_bdevs_operational": 4, 00:17:27.398 "base_bdevs_list": [ 00:17:27.398 { 00:17:27.398 "name": "spare", 00:17:27.398 "uuid": "7bd0979a-4117-5141-8587-f9831bbbe1d2", 00:17:27.398 "is_configured": true, 00:17:27.398 "data_offset": 0, 00:17:27.398 "data_size": 65536 00:17:27.398 }, 00:17:27.398 { 00:17:27.398 
"name": "BaseBdev2", 00:17:27.398 "uuid": "f0e0889b-11da-5fc8-8353-5629a75ec40f", 00:17:27.398 "is_configured": true, 00:17:27.398 "data_offset": 0, 00:17:27.398 "data_size": 65536 00:17:27.398 }, 00:17:27.398 { 00:17:27.398 "name": "BaseBdev3", 00:17:27.398 "uuid": "c917b057-3f7a-574b-b833-2bc30701ec37", 00:17:27.398 "is_configured": true, 00:17:27.398 "data_offset": 0, 00:17:27.398 "data_size": 65536 00:17:27.398 }, 00:17:27.398 { 00:17:27.398 "name": "BaseBdev4", 00:17:27.398 "uuid": "e47b56d9-811c-5835-8a7e-87d62b163270", 00:17:27.398 "is_configured": true, 00:17:27.398 "data_offset": 0, 00:17:27.398 "data_size": 65536 00:17:27.398 } 00:17:27.398 ] 00:17:27.398 }' 00:17:27.398 10:40:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.398 10:40:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.968 10:40:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:27.968 10:40:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.968 10:40:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.968 [2024-11-20 10:40:31.144365] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:27.968 [2024-11-20 10:40:31.144456] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:27.968 [2024-11-20 10:40:31.144558] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:27.968 [2024-11-20 10:40:31.144668] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:27.968 [2024-11-20 10:40:31.144715] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:27.968 10:40:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.968 10:40:31 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.968 10:40:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.968 10:40:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.968 10:40:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:27.968 10:40:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.968 10:40:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:27.968 10:40:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:27.968 10:40:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:27.968 10:40:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:27.968 10:40:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:27.968 10:40:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:27.968 10:40:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:27.968 10:40:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:27.968 10:40:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:27.968 10:40:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:27.968 10:40:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:27.968 10:40:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:27.968 10:40:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:27.968 /dev/nbd0 00:17:27.968 10:40:31 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:27.968 10:40:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:27.968 10:40:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:27.968 10:40:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:27.968 10:40:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:27.968 10:40:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:27.968 10:40:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:27.968 10:40:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:27.968 10:40:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:27.968 10:40:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:27.968 10:40:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:27.968 1+0 records in 00:17:27.968 1+0 records out 00:17:27.968 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463616 s, 8.8 MB/s 00:17:27.968 10:40:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:27.968 10:40:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:27.968 10:40:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:27.968 10:40:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:27.968 10:40:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:27.968 10:40:31 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:27.968 10:40:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:27.968 10:40:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:28.228 /dev/nbd1 00:17:28.228 10:40:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:28.228 10:40:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:28.228 10:40:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:28.228 10:40:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:28.228 10:40:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:28.228 10:40:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:28.228 10:40:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:28.228 10:40:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:28.228 10:40:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:28.228 10:40:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:28.228 10:40:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:28.228 1+0 records in 00:17:28.228 1+0 records out 00:17:28.228 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374629 s, 10.9 MB/s 00:17:28.228 10:40:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:28.228 10:40:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:28.228 10:40:31 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:28.228 10:40:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:28.228 10:40:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:28.228 10:40:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:28.228 10:40:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:28.228 10:40:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:28.487 10:40:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:28.487 10:40:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:28.487 10:40:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:28.487 10:40:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:28.487 10:40:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:28.487 10:40:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:28.487 10:40:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:28.747 10:40:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:28.747 10:40:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:28.747 10:40:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:28.747 10:40:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:28.747 10:40:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:28.747 10:40:32 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:28.747 10:40:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:28.747 10:40:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:28.747 10:40:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:28.747 10:40:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:29.006 10:40:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:29.006 10:40:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:29.006 10:40:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:29.006 10:40:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:29.006 10:40:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:29.006 10:40:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:29.006 10:40:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:29.006 10:40:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:29.006 10:40:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:29.006 10:40:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84756 00:17:29.006 10:40:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84756 ']' 00:17:29.006 10:40:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84756 00:17:29.006 10:40:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:17:29.006 10:40:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:29.006 10:40:32 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84756 00:17:29.006 10:40:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:29.006 10:40:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:29.006 10:40:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84756' 00:17:29.006 killing process with pid 84756 00:17:29.006 10:40:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84756 00:17:29.006 Received shutdown signal, test time was about 60.000000 seconds 00:17:29.006 00:17:29.006 Latency(us) 00:17:29.006 [2024-11-20T10:40:32.485Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:29.006 [2024-11-20T10:40:32.485Z] =================================================================================================================== 00:17:29.006 [2024-11-20T10:40:32.485Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:29.006 [2024-11-20 10:40:32.363003] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:29.006 10:40:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84756 00:17:29.573 [2024-11-20 10:40:32.819830] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:30.509 00:17:30.509 real 0m19.984s 00:17:30.509 user 0m23.908s 00:17:30.509 sys 0m2.239s 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.509 ************************************ 00:17:30.509 END TEST raid5f_rebuild_test 00:17:30.509 ************************************ 00:17:30.509 10:40:33 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb 
raid_rebuild_test raid5f 4 true false true 00:17:30.509 10:40:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:30.509 10:40:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:30.509 10:40:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:30.509 ************************************ 00:17:30.509 START TEST raid5f_rebuild_test_sb 00:17:30.509 ************************************ 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:30.509 10:40:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85272 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85272 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85272 ']' 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:30.509 10:40:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.768 [2024-11-20 10:40:34.013572] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:17:30.769 [2024-11-20 10:40:34.013780] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:17:30.769 Zero copy mechanism will not be used. 
00:17:30.769 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85272 ] 00:17:30.769 [2024-11-20 10:40:34.184589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.029 [2024-11-20 10:40:34.289528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.029 [2024-11-20 10:40:34.489600] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:31.029 [2024-11-20 10:40:34.489693] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:31.599 10:40:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:31.599 10:40:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:31.599 10:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:31.599 10:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:31.599 10:40:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.599 10:40:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.599 BaseBdev1_malloc 00:17:31.599 10:40:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.599 10:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:31.599 10:40:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.599 10:40:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.599 [2024-11-20 10:40:34.874176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:31.599 [2024-11-20 10:40:34.874241] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:17:31.599 [2024-11-20 10:40:34.874265] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:31.599 [2024-11-20 10:40:34.874274] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.599 [2024-11-20 10:40:34.876291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.599 [2024-11-20 10:40:34.876331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:31.599 BaseBdev1 00:17:31.599 10:40:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.599 10:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:31.599 10:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:31.599 10:40:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.599 10:40:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.599 BaseBdev2_malloc 00:17:31.599 10:40:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.599 10:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:31.599 10:40:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.599 10:40:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.599 [2024-11-20 10:40:34.925666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:31.599 [2024-11-20 10:40:34.925719] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.599 [2024-11-20 10:40:34.925736] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:31.599 
[2024-11-20 10:40:34.925747] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.599 [2024-11-20 10:40:34.927683] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.599 [2024-11-20 10:40:34.927799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:31.599 BaseBdev2 00:17:31.599 10:40:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.599 10:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:31.599 10:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:31.599 10:40:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.599 10:40:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.599 BaseBdev3_malloc 00:17:31.599 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.599 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:31.599 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.599 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.599 [2024-11-20 10:40:35.011304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:31.599 [2024-11-20 10:40:35.011376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.599 [2024-11-20 10:40:35.011414] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:31.599 [2024-11-20 10:40:35.011425] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.599 [2024-11-20 10:40:35.013366] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.599 [2024-11-20 10:40:35.013414] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:31.599 BaseBdev3 00:17:31.599 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.599 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:31.599 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:31.599 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.599 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.599 BaseBdev4_malloc 00:17:31.599 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.599 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:31.599 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.599 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.599 [2024-11-20 10:40:35.064299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:31.599 [2024-11-20 10:40:35.064351] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.599 [2024-11-20 10:40:35.064387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:31.599 [2024-11-20 10:40:35.064397] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.599 [2024-11-20 10:40:35.066362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.599 [2024-11-20 10:40:35.066409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev4 00:17:31.599 BaseBdev4 00:17:31.599 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.599 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:31.599 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.599 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.859 spare_malloc 00:17:31.859 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.859 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:31.859 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.859 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.859 spare_delay 00:17:31.859 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.859 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:31.859 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.859 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.859 [2024-11-20 10:40:35.128892] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:31.859 [2024-11-20 10:40:35.128944] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.859 [2024-11-20 10:40:35.128961] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:31.859 [2024-11-20 10:40:35.128970] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.859 [2024-11-20 10:40:35.130932] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.859 [2024-11-20 10:40:35.130972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:31.859 spare 00:17:31.859 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.859 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:31.859 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.859 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.859 [2024-11-20 10:40:35.140942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:31.859 [2024-11-20 10:40:35.142757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:31.859 [2024-11-20 10:40:35.142816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:31.859 [2024-11-20 10:40:35.142863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:31.859 [2024-11-20 10:40:35.143032] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:31.859 [2024-11-20 10:40:35.143049] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:31.860 [2024-11-20 10:40:35.143262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:31.860 [2024-11-20 10:40:35.150308] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:31.860 [2024-11-20 10:40:35.150328] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:31.860 [2024-11-20 10:40:35.150512] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:17:31.860 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.860 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:31.860 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:31.860 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:31.860 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:31.860 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:31.860 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:31.860 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.860 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.860 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.860 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.860 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.860 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.860 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.860 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.860 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.860 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.860 "name": "raid_bdev1", 00:17:31.860 "uuid": 
"daa8233a-f2f2-43e8-8127-cf1f959e441b", 00:17:31.860 "strip_size_kb": 64, 00:17:31.860 "state": "online", 00:17:31.860 "raid_level": "raid5f", 00:17:31.860 "superblock": true, 00:17:31.860 "num_base_bdevs": 4, 00:17:31.860 "num_base_bdevs_discovered": 4, 00:17:31.860 "num_base_bdevs_operational": 4, 00:17:31.860 "base_bdevs_list": [ 00:17:31.860 { 00:17:31.860 "name": "BaseBdev1", 00:17:31.860 "uuid": "76b7c6a4-40c1-5d3d-b051-4ceaa6bc45d7", 00:17:31.860 "is_configured": true, 00:17:31.860 "data_offset": 2048, 00:17:31.860 "data_size": 63488 00:17:31.860 }, 00:17:31.860 { 00:17:31.860 "name": "BaseBdev2", 00:17:31.860 "uuid": "cc113672-d7c8-5b79-b4f8-2591c56503f8", 00:17:31.860 "is_configured": true, 00:17:31.860 "data_offset": 2048, 00:17:31.860 "data_size": 63488 00:17:31.860 }, 00:17:31.860 { 00:17:31.860 "name": "BaseBdev3", 00:17:31.860 "uuid": "fe80fcb9-657e-5be5-a6a1-4b9dd32aa134", 00:17:31.860 "is_configured": true, 00:17:31.860 "data_offset": 2048, 00:17:31.860 "data_size": 63488 00:17:31.860 }, 00:17:31.860 { 00:17:31.860 "name": "BaseBdev4", 00:17:31.860 "uuid": "d5af59da-95ce-5b88-bd5f-c68dc886c036", 00:17:31.860 "is_configured": true, 00:17:31.860 "data_offset": 2048, 00:17:31.860 "data_size": 63488 00:17:31.860 } 00:17:31.860 ] 00:17:31.860 }' 00:17:31.860 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.860 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.119 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:32.119 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.119 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.119 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:32.119 [2024-11-20 10:40:35.573871] bdev_raid.c:1133:raid_bdev_dump_info_json: 
*DEBUG*: raid_bdev_dump_config_json 00:17:32.119 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.379 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:17:32.379 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:32.379 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.379 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.379 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.379 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.379 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:32.379 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:32.379 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:32.379 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:32.379 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:32.379 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:32.379 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:32.379 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:32.379 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:32.379 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:32.379 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 
00:17:32.379 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:32.379 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:32.379 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:32.379 [2024-11-20 10:40:35.825255] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:32.379 /dev/nbd0 00:17:32.639 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:32.639 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:32.639 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:32.639 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:32.639 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:32.639 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:32.639 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:32.639 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:32.639 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:32.639 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:32.639 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:32.639 1+0 records in 00:17:32.639 1+0 records out 00:17:32.639 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275063 s, 14.9 MB/s 00:17:32.639 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:32.639 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:32.639 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:32.639 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:32.639 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:32.639 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:32.639 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:32.639 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:32.639 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:32.639 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:32.639 10:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:17:32.899 496+0 records in 00:17:32.899 496+0 records out 00:17:32.899 97517568 bytes (98 MB, 93 MiB) copied, 0.445595 s, 219 MB/s 00:17:32.899 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:32.899 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:32.899 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:32.899 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:32.899 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:32.899 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:17:32.899 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:33.159 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:33.159 [2024-11-20 10:40:36.557513] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:33.159 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:33.159 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:33.159 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:33.159 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:33.159 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:33.159 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:33.159 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:33.159 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:33.159 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.159 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.159 [2024-11-20 10:40:36.571029] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:33.159 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.159 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:33.159 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:33.159 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:33.159 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:33.159 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:33.160 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:33.160 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.160 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.160 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.160 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.160 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.160 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.160 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.160 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.160 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.160 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.160 "name": "raid_bdev1", 00:17:33.160 "uuid": "daa8233a-f2f2-43e8-8127-cf1f959e441b", 00:17:33.160 "strip_size_kb": 64, 00:17:33.160 "state": "online", 00:17:33.160 "raid_level": "raid5f", 00:17:33.160 "superblock": true, 00:17:33.160 "num_base_bdevs": 4, 00:17:33.160 "num_base_bdevs_discovered": 3, 00:17:33.160 "num_base_bdevs_operational": 3, 00:17:33.160 "base_bdevs_list": [ 00:17:33.160 { 00:17:33.160 "name": null, 00:17:33.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.160 "is_configured": 
false, 00:17:33.160 "data_offset": 0, 00:17:33.160 "data_size": 63488 00:17:33.160 }, 00:17:33.160 { 00:17:33.160 "name": "BaseBdev2", 00:17:33.160 "uuid": "cc113672-d7c8-5b79-b4f8-2591c56503f8", 00:17:33.160 "is_configured": true, 00:17:33.160 "data_offset": 2048, 00:17:33.160 "data_size": 63488 00:17:33.160 }, 00:17:33.160 { 00:17:33.160 "name": "BaseBdev3", 00:17:33.160 "uuid": "fe80fcb9-657e-5be5-a6a1-4b9dd32aa134", 00:17:33.160 "is_configured": true, 00:17:33.160 "data_offset": 2048, 00:17:33.160 "data_size": 63488 00:17:33.160 }, 00:17:33.160 { 00:17:33.160 "name": "BaseBdev4", 00:17:33.160 "uuid": "d5af59da-95ce-5b88-bd5f-c68dc886c036", 00:17:33.160 "is_configured": true, 00:17:33.160 "data_offset": 2048, 00:17:33.160 "data_size": 63488 00:17:33.160 } 00:17:33.160 ] 00:17:33.160 }' 00:17:33.160 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.160 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.729 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:33.729 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.729 10:40:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.729 [2024-11-20 10:40:36.990314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:33.729 [2024-11-20 10:40:37.004656] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:17:33.729 10:40:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.729 10:40:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:33.729 [2024-11-20 10:40:37.013568] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:34.667 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:34.667 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:34.667 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:34.667 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:34.667 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:34.667 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.667 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.667 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.667 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.667 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.667 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:34.667 "name": "raid_bdev1", 00:17:34.667 "uuid": "daa8233a-f2f2-43e8-8127-cf1f959e441b", 00:17:34.667 "strip_size_kb": 64, 00:17:34.667 "state": "online", 00:17:34.667 "raid_level": "raid5f", 00:17:34.667 "superblock": true, 00:17:34.667 "num_base_bdevs": 4, 00:17:34.667 "num_base_bdevs_discovered": 4, 00:17:34.667 "num_base_bdevs_operational": 4, 00:17:34.667 "process": { 00:17:34.667 "type": "rebuild", 00:17:34.667 "target": "spare", 00:17:34.667 "progress": { 00:17:34.667 "blocks": 19200, 00:17:34.667 "percent": 10 00:17:34.667 } 00:17:34.667 }, 00:17:34.667 "base_bdevs_list": [ 00:17:34.667 { 00:17:34.667 "name": "spare", 00:17:34.667 "uuid": "20f8951d-f014-50df-9f82-f37d8a591c38", 00:17:34.667 "is_configured": true, 00:17:34.667 "data_offset": 2048, 00:17:34.667 "data_size": 63488 00:17:34.667 }, 
00:17:34.667 { 00:17:34.667 "name": "BaseBdev2", 00:17:34.667 "uuid": "cc113672-d7c8-5b79-b4f8-2591c56503f8", 00:17:34.667 "is_configured": true, 00:17:34.667 "data_offset": 2048, 00:17:34.667 "data_size": 63488 00:17:34.667 }, 00:17:34.667 { 00:17:34.667 "name": "BaseBdev3", 00:17:34.667 "uuid": "fe80fcb9-657e-5be5-a6a1-4b9dd32aa134", 00:17:34.667 "is_configured": true, 00:17:34.667 "data_offset": 2048, 00:17:34.667 "data_size": 63488 00:17:34.667 }, 00:17:34.667 { 00:17:34.667 "name": "BaseBdev4", 00:17:34.667 "uuid": "d5af59da-95ce-5b88-bd5f-c68dc886c036", 00:17:34.667 "is_configured": true, 00:17:34.667 "data_offset": 2048, 00:17:34.667 "data_size": 63488 00:17:34.667 } 00:17:34.667 ] 00:17:34.667 }' 00:17:34.667 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:34.667 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:34.667 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:34.667 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:34.667 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:34.667 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.667 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.928 [2024-11-20 10:40:38.144402] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:34.928 [2024-11-20 10:40:38.219317] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:34.928 [2024-11-20 10:40:38.219460] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:34.928 [2024-11-20 10:40:38.219483] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:34.928 
[2024-11-20 10:40:38.219493] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:34.928 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.928 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:34.928 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.928 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:34.928 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:34.928 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:34.928 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:34.928 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.928 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.928 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.928 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.928 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.928 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.928 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.928 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.928 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.928 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.928 "name": "raid_bdev1", 00:17:34.928 "uuid": "daa8233a-f2f2-43e8-8127-cf1f959e441b", 00:17:34.928 "strip_size_kb": 64, 00:17:34.928 "state": "online", 00:17:34.928 "raid_level": "raid5f", 00:17:34.928 "superblock": true, 00:17:34.928 "num_base_bdevs": 4, 00:17:34.928 "num_base_bdevs_discovered": 3, 00:17:34.928 "num_base_bdevs_operational": 3, 00:17:34.928 "base_bdevs_list": [ 00:17:34.928 { 00:17:34.928 "name": null, 00:17:34.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.928 "is_configured": false, 00:17:34.928 "data_offset": 0, 00:17:34.928 "data_size": 63488 00:17:34.928 }, 00:17:34.928 { 00:17:34.928 "name": "BaseBdev2", 00:17:34.928 "uuid": "cc113672-d7c8-5b79-b4f8-2591c56503f8", 00:17:34.928 "is_configured": true, 00:17:34.928 "data_offset": 2048, 00:17:34.928 "data_size": 63488 00:17:34.928 }, 00:17:34.928 { 00:17:34.928 "name": "BaseBdev3", 00:17:34.928 "uuid": "fe80fcb9-657e-5be5-a6a1-4b9dd32aa134", 00:17:34.928 "is_configured": true, 00:17:34.928 "data_offset": 2048, 00:17:34.928 "data_size": 63488 00:17:34.928 }, 00:17:34.928 { 00:17:34.928 "name": "BaseBdev4", 00:17:34.928 "uuid": "d5af59da-95ce-5b88-bd5f-c68dc886c036", 00:17:34.928 "is_configured": true, 00:17:34.928 "data_offset": 2048, 00:17:34.928 "data_size": 63488 00:17:34.928 } 00:17:34.928 ] 00:17:34.928 }' 00:17:34.928 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.928 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.502 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:35.502 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:35.502 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:35.502 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:17:35.502 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:35.503 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.503 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.503 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.503 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.503 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.503 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:35.503 "name": "raid_bdev1", 00:17:35.503 "uuid": "daa8233a-f2f2-43e8-8127-cf1f959e441b", 00:17:35.503 "strip_size_kb": 64, 00:17:35.503 "state": "online", 00:17:35.503 "raid_level": "raid5f", 00:17:35.503 "superblock": true, 00:17:35.503 "num_base_bdevs": 4, 00:17:35.503 "num_base_bdevs_discovered": 3, 00:17:35.503 "num_base_bdevs_operational": 3, 00:17:35.503 "base_bdevs_list": [ 00:17:35.503 { 00:17:35.503 "name": null, 00:17:35.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.503 "is_configured": false, 00:17:35.503 "data_offset": 0, 00:17:35.503 "data_size": 63488 00:17:35.503 }, 00:17:35.503 { 00:17:35.503 "name": "BaseBdev2", 00:17:35.503 "uuid": "cc113672-d7c8-5b79-b4f8-2591c56503f8", 00:17:35.503 "is_configured": true, 00:17:35.503 "data_offset": 2048, 00:17:35.503 "data_size": 63488 00:17:35.503 }, 00:17:35.503 { 00:17:35.503 "name": "BaseBdev3", 00:17:35.503 "uuid": "fe80fcb9-657e-5be5-a6a1-4b9dd32aa134", 00:17:35.503 "is_configured": true, 00:17:35.503 "data_offset": 2048, 00:17:35.503 "data_size": 63488 00:17:35.503 }, 00:17:35.503 { 00:17:35.503 "name": "BaseBdev4", 00:17:35.503 "uuid": "d5af59da-95ce-5b88-bd5f-c68dc886c036", 
00:17:35.503 "is_configured": true, 00:17:35.503 "data_offset": 2048, 00:17:35.503 "data_size": 63488 00:17:35.503 } 00:17:35.503 ] 00:17:35.503 }' 00:17:35.503 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:35.503 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:35.503 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:35.503 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:35.503 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:35.503 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.503 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.503 [2024-11-20 10:40:38.860418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:35.503 [2024-11-20 10:40:38.874577] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:17:35.503 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.503 10:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:35.503 [2024-11-20 10:40:38.884033] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:36.442 10:40:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:36.442 10:40:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.442 10:40:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:36.442 10:40:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
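The numbers in the full-stripe `dd` earlier in the log (`bs=196608 count=496`, `write_unit_size=384`, `echo 192`, `97517568 bytes ... copied`) all follow from the raid5f geometry reported by `bdev_raid_get_bdevs` (`strip_size_kb: 64`, four base bdevs, one of which holds parity per stripe). A minimal sketch that recomputes them — the variable names here are mine, not from `bdev_raid.sh`:

```shell
#!/usr/bin/env bash
# Recompute the raid5f full-stripe write geometry recorded in the log above.
strip_size_kb=64       # "strip_size_kb": 64 in the raid_bdev_info JSON
num_base_bdevs=4       # spare + BaseBdev2..BaseBdev4
data_bdevs=$((num_base_bdevs - 1))   # raid5f: one strip per stripe is parity

full_stripe_kb=$((strip_size_kb * data_bdevs))       # the "echo 192" in the trace
write_unit_blocks=$((full_stripe_kb * 1024 / 512))   # write_unit_size=384 (512 B blocks)
dd_bs=$((full_stripe_kb * 1024))                     # dd bs=196608
total_bytes=$((dd_bs * 496))                         # dd count=496

echo "$full_stripe_kb $write_unit_blocks $dd_bs $total_bytes"
# -> 192 384 196608 97517568, matching "97517568 bytes (98 MB, 93 MiB) copied"
```

Writing in exact full-stripe multiples is what lets the raid5f bdev avoid read-modify-write cycles during the test's data fill.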
00:17:36.442 10:40:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.442 10:40:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.442 10:40:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.442 10:40:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.442 10:40:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.442 10:40:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.701 10:40:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.701 "name": "raid_bdev1", 00:17:36.701 "uuid": "daa8233a-f2f2-43e8-8127-cf1f959e441b", 00:17:36.701 "strip_size_kb": 64, 00:17:36.701 "state": "online", 00:17:36.701 "raid_level": "raid5f", 00:17:36.701 "superblock": true, 00:17:36.701 "num_base_bdevs": 4, 00:17:36.701 "num_base_bdevs_discovered": 4, 00:17:36.701 "num_base_bdevs_operational": 4, 00:17:36.701 "process": { 00:17:36.701 "type": "rebuild", 00:17:36.701 "target": "spare", 00:17:36.701 "progress": { 00:17:36.701 "blocks": 19200, 00:17:36.701 "percent": 10 00:17:36.701 } 00:17:36.701 }, 00:17:36.701 "base_bdevs_list": [ 00:17:36.701 { 00:17:36.701 "name": "spare", 00:17:36.701 "uuid": "20f8951d-f014-50df-9f82-f37d8a591c38", 00:17:36.701 "is_configured": true, 00:17:36.701 "data_offset": 2048, 00:17:36.701 "data_size": 63488 00:17:36.701 }, 00:17:36.701 { 00:17:36.701 "name": "BaseBdev2", 00:17:36.701 "uuid": "cc113672-d7c8-5b79-b4f8-2591c56503f8", 00:17:36.701 "is_configured": true, 00:17:36.701 "data_offset": 2048, 00:17:36.701 "data_size": 63488 00:17:36.701 }, 00:17:36.701 { 00:17:36.701 "name": "BaseBdev3", 00:17:36.701 "uuid": "fe80fcb9-657e-5be5-a6a1-4b9dd32aa134", 00:17:36.701 "is_configured": true, 00:17:36.701 "data_offset": 2048, 
00:17:36.701 "data_size": 63488 00:17:36.701 }, 00:17:36.701 { 00:17:36.701 "name": "BaseBdev4", 00:17:36.701 "uuid": "d5af59da-95ce-5b88-bd5f-c68dc886c036", 00:17:36.701 "is_configured": true, 00:17:36.701 "data_offset": 2048, 00:17:36.701 "data_size": 63488 00:17:36.701 } 00:17:36.701 ] 00:17:36.701 }' 00:17:36.701 10:40:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.701 10:40:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:36.701 10:40:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.701 10:40:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:36.701 10:40:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:36.701 10:40:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:36.701 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:36.701 10:40:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:36.701 10:40:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:36.701 10:40:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=645 00:17:36.701 10:40:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:36.701 10:40:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:36.701 10:40:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.701 10:40:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:36.701 10:40:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
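The `line 666: [: =: unary operator expected` message captured above is bash's classic empty-unquoted-variable failure: when the variable in the recorded `'[' = false ']'` test expands to nothing, `[` is left with only `= false` and cannot parse it. A standalone reproduction of the failure mode (not taken from `bdev_raid.sh` itself):

```shell
#!/usr/bin/env bash
# Reproduce the "[: =: unary operator expected" failure recorded in the log.
flag=""          # empty variable, like the unset one on bdev_raid.sh line 666
unquoted=0; quoted=0

# Unquoted: the empty expansion leaves `[ = false ]`, which `[` cannot
# parse -> error message on stderr and exit status 2.
[ $flag = false ] 2>/dev/null || unquoted=$?

# Quoted: `[ "" = false ]` is a well-formed string comparison that is
# simply false -> exit status 1, no error message.
[ "$flag" = false ] || quoted=$?

echo "unquoted=$unquoted quoted=$quoted"   # -> unquoted=2 quoted=1
```

Quoting the expansion (or using the `[[ ... ]]` form, which does not word-split and appears elsewhere in the same trace) turns the syntax error into an ordinary false comparison, which is why the test continues past it.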
00:17:36.701 10:40:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.701 10:40:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.701 10:40:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.701 10:40:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.701 10:40:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.701 10:40:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.701 10:40:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.701 "name": "raid_bdev1", 00:17:36.701 "uuid": "daa8233a-f2f2-43e8-8127-cf1f959e441b", 00:17:36.701 "strip_size_kb": 64, 00:17:36.701 "state": "online", 00:17:36.701 "raid_level": "raid5f", 00:17:36.701 "superblock": true, 00:17:36.701 "num_base_bdevs": 4, 00:17:36.701 "num_base_bdevs_discovered": 4, 00:17:36.701 "num_base_bdevs_operational": 4, 00:17:36.701 "process": { 00:17:36.701 "type": "rebuild", 00:17:36.701 "target": "spare", 00:17:36.701 "progress": { 00:17:36.701 "blocks": 21120, 00:17:36.701 "percent": 11 00:17:36.701 } 00:17:36.701 }, 00:17:36.701 "base_bdevs_list": [ 00:17:36.701 { 00:17:36.701 "name": "spare", 00:17:36.701 "uuid": "20f8951d-f014-50df-9f82-f37d8a591c38", 00:17:36.701 "is_configured": true, 00:17:36.701 "data_offset": 2048, 00:17:36.701 "data_size": 63488 00:17:36.701 }, 00:17:36.701 { 00:17:36.701 "name": "BaseBdev2", 00:17:36.701 "uuid": "cc113672-d7c8-5b79-b4f8-2591c56503f8", 00:17:36.701 "is_configured": true, 00:17:36.701 "data_offset": 2048, 00:17:36.701 "data_size": 63488 00:17:36.701 }, 00:17:36.701 { 00:17:36.701 "name": "BaseBdev3", 00:17:36.701 "uuid": "fe80fcb9-657e-5be5-a6a1-4b9dd32aa134", 00:17:36.701 "is_configured": true, 00:17:36.701 "data_offset": 2048, 
00:17:36.701 "data_size": 63488 00:17:36.701 }, 00:17:36.701 { 00:17:36.701 "name": "BaseBdev4", 00:17:36.701 "uuid": "d5af59da-95ce-5b88-bd5f-c68dc886c036", 00:17:36.701 "is_configured": true, 00:17:36.701 "data_offset": 2048, 00:17:36.701 "data_size": 63488 00:17:36.701 } 00:17:36.701 ] 00:17:36.701 }' 00:17:36.701 10:40:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.701 10:40:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:36.701 10:40:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.960 10:40:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:36.960 10:40:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:37.898 10:40:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:37.899 10:40:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:37.899 10:40:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.899 10:40:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:37.899 10:40:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:37.899 10:40:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.899 10:40:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.899 10:40:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.899 10:40:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.899 10:40:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
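The `local timeout=645`, `(( SECONDS < timeout ))`, and `sleep 1` lines repeating through the trace are bash's built-in wall-clock polling idiom: `SECONDS` counts seconds since the shell started (and restarts from any value assigned to it), so comparing it against a deadline bounds how long the rebuild is awaited. A stripped-down sketch of the loop's shape — `check_done` is a hypothetical stand-in for the real `rpc_cmd bdev_raid_get_bdevs` / `jq` progress query:

```shell
#!/usr/bin/env bash
# Sketch of the "(( SECONDS < timeout ))" polling loop seen in bdev_raid.sh.
polls=0
check_done() { (( polls >= 3 )); }   # hypothetical: report "done" on the 3rd poll

SECONDS=0        # assigning resets bash's elapsed-time counter
timeout=10       # deadline in seconds (the test above uses 645)

while (( SECONDS < timeout )); do
    polls=$((polls + 1))
    check_done && break
    # The real loop sleeps between iterations so repeated RPC state
    # queries don't busy-spin against the target.
    sleep 0.1
done

echo "polls=$polls"   # -> polls=3
```

Because `SECONDS` tracks real elapsed time rather than iteration count, the deadline holds even when each RPC round-trip takes longer than the `sleep` interval.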
00:17:37.899 10:40:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.899 10:40:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.899 "name": "raid_bdev1", 00:17:37.899 "uuid": "daa8233a-f2f2-43e8-8127-cf1f959e441b", 00:17:37.899 "strip_size_kb": 64, 00:17:37.899 "state": "online", 00:17:37.899 "raid_level": "raid5f", 00:17:37.899 "superblock": true, 00:17:37.899 "num_base_bdevs": 4, 00:17:37.899 "num_base_bdevs_discovered": 4, 00:17:37.899 "num_base_bdevs_operational": 4, 00:17:37.899 "process": { 00:17:37.899 "type": "rebuild", 00:17:37.899 "target": "spare", 00:17:37.899 "progress": { 00:17:37.899 "blocks": 44160, 00:17:37.899 "percent": 23 00:17:37.899 } 00:17:37.899 }, 00:17:37.899 "base_bdevs_list": [ 00:17:37.899 { 00:17:37.899 "name": "spare", 00:17:37.899 "uuid": "20f8951d-f014-50df-9f82-f37d8a591c38", 00:17:37.899 "is_configured": true, 00:17:37.899 "data_offset": 2048, 00:17:37.899 "data_size": 63488 00:17:37.899 }, 00:17:37.899 { 00:17:37.899 "name": "BaseBdev2", 00:17:37.899 "uuid": "cc113672-d7c8-5b79-b4f8-2591c56503f8", 00:17:37.899 "is_configured": true, 00:17:37.899 "data_offset": 2048, 00:17:37.899 "data_size": 63488 00:17:37.899 }, 00:17:37.899 { 00:17:37.899 "name": "BaseBdev3", 00:17:37.899 "uuid": "fe80fcb9-657e-5be5-a6a1-4b9dd32aa134", 00:17:37.899 "is_configured": true, 00:17:37.899 "data_offset": 2048, 00:17:37.899 "data_size": 63488 00:17:37.899 }, 00:17:37.899 { 00:17:37.899 "name": "BaseBdev4", 00:17:37.899 "uuid": "d5af59da-95ce-5b88-bd5f-c68dc886c036", 00:17:37.899 "is_configured": true, 00:17:37.899 "data_offset": 2048, 00:17:37.899 "data_size": 63488 00:17:37.899 } 00:17:37.899 ] 00:17:37.899 }' 00:17:37.899 10:40:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:37.899 10:40:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:37.899 10:40:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:37.899 10:40:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:37.899 10:40:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:39.281 10:40:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:39.281 10:40:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:39.281 10:40:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.281 10:40:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:39.281 10:40:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:39.281 10:40:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.281 10:40:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.281 10:40:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.281 10:40:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.281 10:40:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.281 10:40:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.281 10:40:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.281 "name": "raid_bdev1", 00:17:39.281 "uuid": "daa8233a-f2f2-43e8-8127-cf1f959e441b", 00:17:39.281 "strip_size_kb": 64, 00:17:39.281 "state": "online", 00:17:39.281 "raid_level": "raid5f", 00:17:39.281 "superblock": true, 00:17:39.281 "num_base_bdevs": 4, 00:17:39.281 "num_base_bdevs_discovered": 4, 00:17:39.281 "num_base_bdevs_operational": 
4, 00:17:39.281 "process": { 00:17:39.281 "type": "rebuild", 00:17:39.281 "target": "spare", 00:17:39.281 "progress": { 00:17:39.281 "blocks": 65280, 00:17:39.281 "percent": 34 00:17:39.281 } 00:17:39.281 }, 00:17:39.281 "base_bdevs_list": [ 00:17:39.281 { 00:17:39.281 "name": "spare", 00:17:39.281 "uuid": "20f8951d-f014-50df-9f82-f37d8a591c38", 00:17:39.281 "is_configured": true, 00:17:39.281 "data_offset": 2048, 00:17:39.281 "data_size": 63488 00:17:39.281 }, 00:17:39.281 { 00:17:39.281 "name": "BaseBdev2", 00:17:39.281 "uuid": "cc113672-d7c8-5b79-b4f8-2591c56503f8", 00:17:39.281 "is_configured": true, 00:17:39.281 "data_offset": 2048, 00:17:39.281 "data_size": 63488 00:17:39.281 }, 00:17:39.281 { 00:17:39.281 "name": "BaseBdev3", 00:17:39.281 "uuid": "fe80fcb9-657e-5be5-a6a1-4b9dd32aa134", 00:17:39.281 "is_configured": true, 00:17:39.281 "data_offset": 2048, 00:17:39.281 "data_size": 63488 00:17:39.281 }, 00:17:39.281 { 00:17:39.281 "name": "BaseBdev4", 00:17:39.281 "uuid": "d5af59da-95ce-5b88-bd5f-c68dc886c036", 00:17:39.281 "is_configured": true, 00:17:39.281 "data_offset": 2048, 00:17:39.281 "data_size": 63488 00:17:39.281 } 00:17:39.281 ] 00:17:39.281 }' 00:17:39.281 10:40:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.281 10:40:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:39.281 10:40:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.281 10:40:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:39.281 10:40:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:40.223 10:40:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:40.223 10:40:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:40.223 
10:40:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.223 10:40:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:40.223 10:40:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:40.223 10:40:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.223 10:40:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.223 10:40:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.223 10:40:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.223 10:40:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.223 10:40:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.223 10:40:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.223 "name": "raid_bdev1", 00:17:40.223 "uuid": "daa8233a-f2f2-43e8-8127-cf1f959e441b", 00:17:40.223 "strip_size_kb": 64, 00:17:40.223 "state": "online", 00:17:40.223 "raid_level": "raid5f", 00:17:40.223 "superblock": true, 00:17:40.223 "num_base_bdevs": 4, 00:17:40.223 "num_base_bdevs_discovered": 4, 00:17:40.223 "num_base_bdevs_operational": 4, 00:17:40.223 "process": { 00:17:40.223 "type": "rebuild", 00:17:40.223 "target": "spare", 00:17:40.223 "progress": { 00:17:40.223 "blocks": 86400, 00:17:40.223 "percent": 45 00:17:40.223 } 00:17:40.223 }, 00:17:40.223 "base_bdevs_list": [ 00:17:40.223 { 00:17:40.223 "name": "spare", 00:17:40.223 "uuid": "20f8951d-f014-50df-9f82-f37d8a591c38", 00:17:40.223 "is_configured": true, 00:17:40.223 "data_offset": 2048, 00:17:40.223 "data_size": 63488 00:17:40.223 }, 00:17:40.223 { 00:17:40.223 "name": "BaseBdev2", 00:17:40.223 "uuid": 
"cc113672-d7c8-5b79-b4f8-2591c56503f8", 00:17:40.223 "is_configured": true, 00:17:40.223 "data_offset": 2048, 00:17:40.223 "data_size": 63488 00:17:40.223 }, 00:17:40.223 { 00:17:40.223 "name": "BaseBdev3", 00:17:40.223 "uuid": "fe80fcb9-657e-5be5-a6a1-4b9dd32aa134", 00:17:40.223 "is_configured": true, 00:17:40.223 "data_offset": 2048, 00:17:40.223 "data_size": 63488 00:17:40.223 }, 00:17:40.223 { 00:17:40.223 "name": "BaseBdev4", 00:17:40.223 "uuid": "d5af59da-95ce-5b88-bd5f-c68dc886c036", 00:17:40.223 "is_configured": true, 00:17:40.223 "data_offset": 2048, 00:17:40.223 "data_size": 63488 00:17:40.223 } 00:17:40.223 ] 00:17:40.223 }' 00:17:40.223 10:40:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.223 10:40:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:40.223 10:40:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.223 10:40:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:40.223 10:40:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:41.164 10:40:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:41.164 10:40:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:41.164 10:40:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.164 10:40:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:41.164 10:40:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:41.164 10:40:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.164 10:40:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:17:41.164 10:40:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.164 10:40:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.164 10:40:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.164 10:40:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.424 10:40:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.424 "name": "raid_bdev1", 00:17:41.424 "uuid": "daa8233a-f2f2-43e8-8127-cf1f959e441b", 00:17:41.424 "strip_size_kb": 64, 00:17:41.424 "state": "online", 00:17:41.424 "raid_level": "raid5f", 00:17:41.424 "superblock": true, 00:17:41.424 "num_base_bdevs": 4, 00:17:41.424 "num_base_bdevs_discovered": 4, 00:17:41.424 "num_base_bdevs_operational": 4, 00:17:41.424 "process": { 00:17:41.424 "type": "rebuild", 00:17:41.424 "target": "spare", 00:17:41.424 "progress": { 00:17:41.424 "blocks": 109440, 00:17:41.424 "percent": 57 00:17:41.424 } 00:17:41.424 }, 00:17:41.424 "base_bdevs_list": [ 00:17:41.424 { 00:17:41.424 "name": "spare", 00:17:41.424 "uuid": "20f8951d-f014-50df-9f82-f37d8a591c38", 00:17:41.424 "is_configured": true, 00:17:41.424 "data_offset": 2048, 00:17:41.424 "data_size": 63488 00:17:41.424 }, 00:17:41.424 { 00:17:41.424 "name": "BaseBdev2", 00:17:41.424 "uuid": "cc113672-d7c8-5b79-b4f8-2591c56503f8", 00:17:41.424 "is_configured": true, 00:17:41.424 "data_offset": 2048, 00:17:41.424 "data_size": 63488 00:17:41.424 }, 00:17:41.424 { 00:17:41.424 "name": "BaseBdev3", 00:17:41.424 "uuid": "fe80fcb9-657e-5be5-a6a1-4b9dd32aa134", 00:17:41.424 "is_configured": true, 00:17:41.424 "data_offset": 2048, 00:17:41.424 "data_size": 63488 00:17:41.424 }, 00:17:41.424 { 00:17:41.424 "name": "BaseBdev4", 00:17:41.424 "uuid": "d5af59da-95ce-5b88-bd5f-c68dc886c036", 00:17:41.424 "is_configured": true, 00:17:41.424 "data_offset": 
2048, 00:17:41.424 "data_size": 63488 00:17:41.424 } 00:17:41.424 ] 00:17:41.424 }' 00:17:41.424 10:40:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.424 10:40:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:41.424 10:40:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.424 10:40:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:41.424 10:40:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:42.366 10:40:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:42.366 10:40:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:42.366 10:40:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.366 10:40:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:42.366 10:40:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:42.366 10:40:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.366 10:40:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.366 10:40:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.366 10:40:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.366 10:40:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.366 10:40:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.366 10:40:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.366 
"name": "raid_bdev1", 00:17:42.366 "uuid": "daa8233a-f2f2-43e8-8127-cf1f959e441b", 00:17:42.366 "strip_size_kb": 64, 00:17:42.366 "state": "online", 00:17:42.366 "raid_level": "raid5f", 00:17:42.366 "superblock": true, 00:17:42.366 "num_base_bdevs": 4, 00:17:42.366 "num_base_bdevs_discovered": 4, 00:17:42.366 "num_base_bdevs_operational": 4, 00:17:42.366 "process": { 00:17:42.366 "type": "rebuild", 00:17:42.366 "target": "spare", 00:17:42.366 "progress": { 00:17:42.366 "blocks": 130560, 00:17:42.366 "percent": 68 00:17:42.366 } 00:17:42.366 }, 00:17:42.366 "base_bdevs_list": [ 00:17:42.366 { 00:17:42.366 "name": "spare", 00:17:42.366 "uuid": "20f8951d-f014-50df-9f82-f37d8a591c38", 00:17:42.366 "is_configured": true, 00:17:42.366 "data_offset": 2048, 00:17:42.366 "data_size": 63488 00:17:42.366 }, 00:17:42.366 { 00:17:42.366 "name": "BaseBdev2", 00:17:42.366 "uuid": "cc113672-d7c8-5b79-b4f8-2591c56503f8", 00:17:42.366 "is_configured": true, 00:17:42.366 "data_offset": 2048, 00:17:42.366 "data_size": 63488 00:17:42.366 }, 00:17:42.366 { 00:17:42.366 "name": "BaseBdev3", 00:17:42.366 "uuid": "fe80fcb9-657e-5be5-a6a1-4b9dd32aa134", 00:17:42.366 "is_configured": true, 00:17:42.366 "data_offset": 2048, 00:17:42.366 "data_size": 63488 00:17:42.366 }, 00:17:42.366 { 00:17:42.366 "name": "BaseBdev4", 00:17:42.366 "uuid": "d5af59da-95ce-5b88-bd5f-c68dc886c036", 00:17:42.366 "is_configured": true, 00:17:42.366 "data_offset": 2048, 00:17:42.366 "data_size": 63488 00:17:42.366 } 00:17:42.366 ] 00:17:42.366 }' 00:17:42.366 10:40:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.366 10:40:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:42.366 10:40:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.626 10:40:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:42.626 
10:40:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:43.592 10:40:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:43.592 10:40:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:43.592 10:40:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.592 10:40:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:43.592 10:40:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:43.593 10:40:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.593 10:40:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.593 10:40:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.593 10:40:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.593 10:40:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.593 10:40:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.593 10:40:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.593 "name": "raid_bdev1", 00:17:43.593 "uuid": "daa8233a-f2f2-43e8-8127-cf1f959e441b", 00:17:43.593 "strip_size_kb": 64, 00:17:43.593 "state": "online", 00:17:43.593 "raid_level": "raid5f", 00:17:43.593 "superblock": true, 00:17:43.593 "num_base_bdevs": 4, 00:17:43.593 "num_base_bdevs_discovered": 4, 00:17:43.593 "num_base_bdevs_operational": 4, 00:17:43.593 "process": { 00:17:43.593 "type": "rebuild", 00:17:43.593 "target": "spare", 00:17:43.593 "progress": { 00:17:43.593 "blocks": 151680, 00:17:43.593 "percent": 79 00:17:43.593 } 00:17:43.593 }, 
00:17:43.593 "base_bdevs_list": [ 00:17:43.593 { 00:17:43.593 "name": "spare", 00:17:43.593 "uuid": "20f8951d-f014-50df-9f82-f37d8a591c38", 00:17:43.593 "is_configured": true, 00:17:43.593 "data_offset": 2048, 00:17:43.593 "data_size": 63488 00:17:43.593 }, 00:17:43.593 { 00:17:43.593 "name": "BaseBdev2", 00:17:43.593 "uuid": "cc113672-d7c8-5b79-b4f8-2591c56503f8", 00:17:43.593 "is_configured": true, 00:17:43.593 "data_offset": 2048, 00:17:43.593 "data_size": 63488 00:17:43.593 }, 00:17:43.593 { 00:17:43.593 "name": "BaseBdev3", 00:17:43.593 "uuid": "fe80fcb9-657e-5be5-a6a1-4b9dd32aa134", 00:17:43.593 "is_configured": true, 00:17:43.593 "data_offset": 2048, 00:17:43.593 "data_size": 63488 00:17:43.593 }, 00:17:43.593 { 00:17:43.593 "name": "BaseBdev4", 00:17:43.593 "uuid": "d5af59da-95ce-5b88-bd5f-c68dc886c036", 00:17:43.593 "is_configured": true, 00:17:43.593 "data_offset": 2048, 00:17:43.593 "data_size": 63488 00:17:43.593 } 00:17:43.593 ] 00:17:43.593 }' 00:17:43.593 10:40:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.593 10:40:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:43.593 10:40:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.593 10:40:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:43.593 10:40:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:44.974 10:40:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:44.974 10:40:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:44.974 10:40:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.974 10:40:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:17:44.974 10:40:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:44.974 10:40:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.974 10:40:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.974 10:40:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.974 10:40:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.974 10:40:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.974 10:40:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.974 10:40:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:44.974 "name": "raid_bdev1", 00:17:44.974 "uuid": "daa8233a-f2f2-43e8-8127-cf1f959e441b", 00:17:44.974 "strip_size_kb": 64, 00:17:44.974 "state": "online", 00:17:44.974 "raid_level": "raid5f", 00:17:44.974 "superblock": true, 00:17:44.974 "num_base_bdevs": 4, 00:17:44.974 "num_base_bdevs_discovered": 4, 00:17:44.974 "num_base_bdevs_operational": 4, 00:17:44.974 "process": { 00:17:44.974 "type": "rebuild", 00:17:44.974 "target": "spare", 00:17:44.974 "progress": { 00:17:44.974 "blocks": 174720, 00:17:44.974 "percent": 91 00:17:44.974 } 00:17:44.974 }, 00:17:44.974 "base_bdevs_list": [ 00:17:44.974 { 00:17:44.974 "name": "spare", 00:17:44.974 "uuid": "20f8951d-f014-50df-9f82-f37d8a591c38", 00:17:44.974 "is_configured": true, 00:17:44.974 "data_offset": 2048, 00:17:44.974 "data_size": 63488 00:17:44.974 }, 00:17:44.974 { 00:17:44.974 "name": "BaseBdev2", 00:17:44.974 "uuid": "cc113672-d7c8-5b79-b4f8-2591c56503f8", 00:17:44.974 "is_configured": true, 00:17:44.974 "data_offset": 2048, 00:17:44.974 "data_size": 63488 00:17:44.974 }, 00:17:44.974 { 00:17:44.974 "name": "BaseBdev3", 
00:17:44.974 "uuid": "fe80fcb9-657e-5be5-a6a1-4b9dd32aa134", 00:17:44.974 "is_configured": true, 00:17:44.974 "data_offset": 2048, 00:17:44.974 "data_size": 63488 00:17:44.974 }, 00:17:44.974 { 00:17:44.974 "name": "BaseBdev4", 00:17:44.974 "uuid": "d5af59da-95ce-5b88-bd5f-c68dc886c036", 00:17:44.974 "is_configured": true, 00:17:44.974 "data_offset": 2048, 00:17:44.974 "data_size": 63488 00:17:44.974 } 00:17:44.974 ] 00:17:44.974 }' 00:17:44.974 10:40:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.974 10:40:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:44.974 10:40:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.974 10:40:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:44.974 10:40:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:45.544 [2024-11-20 10:40:48.927155] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:45.544 [2024-11-20 10:40:48.927278] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:45.544 [2024-11-20 10:40:48.927454] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:45.804 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:45.804 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:45.804 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:45.804 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:45.804 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:45.804 10:40:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:45.804 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.804 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.804 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.804 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.804 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.804 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:45.804 "name": "raid_bdev1", 00:17:45.804 "uuid": "daa8233a-f2f2-43e8-8127-cf1f959e441b", 00:17:45.804 "strip_size_kb": 64, 00:17:45.804 "state": "online", 00:17:45.804 "raid_level": "raid5f", 00:17:45.804 "superblock": true, 00:17:45.804 "num_base_bdevs": 4, 00:17:45.804 "num_base_bdevs_discovered": 4, 00:17:45.804 "num_base_bdevs_operational": 4, 00:17:45.804 "base_bdevs_list": [ 00:17:45.804 { 00:17:45.804 "name": "spare", 00:17:45.804 "uuid": "20f8951d-f014-50df-9f82-f37d8a591c38", 00:17:45.804 "is_configured": true, 00:17:45.804 "data_offset": 2048, 00:17:45.804 "data_size": 63488 00:17:45.804 }, 00:17:45.804 { 00:17:45.804 "name": "BaseBdev2", 00:17:45.804 "uuid": "cc113672-d7c8-5b79-b4f8-2591c56503f8", 00:17:45.804 "is_configured": true, 00:17:45.804 "data_offset": 2048, 00:17:45.804 "data_size": 63488 00:17:45.804 }, 00:17:45.804 { 00:17:45.804 "name": "BaseBdev3", 00:17:45.804 "uuid": "fe80fcb9-657e-5be5-a6a1-4b9dd32aa134", 00:17:45.804 "is_configured": true, 00:17:45.804 "data_offset": 2048, 00:17:45.804 "data_size": 63488 00:17:45.804 }, 00:17:45.804 { 00:17:45.804 "name": "BaseBdev4", 00:17:45.804 "uuid": "d5af59da-95ce-5b88-bd5f-c68dc886c036", 00:17:45.804 "is_configured": true, 00:17:45.804 "data_offset": 2048, 
00:17:45.804 "data_size": 63488 00:17:45.804 } 00:17:45.804 ] 00:17:45.804 }' 00:17:45.804 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.064 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:46.064 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.064 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:46.064 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:46.064 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:46.064 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.064 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:46.064 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:46.064 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:46.064 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.064 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.064 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.065 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.065 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.065 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:46.065 "name": "raid_bdev1", 00:17:46.065 "uuid": "daa8233a-f2f2-43e8-8127-cf1f959e441b", 00:17:46.065 "strip_size_kb": 64, 00:17:46.065 
"state": "online", 00:17:46.065 "raid_level": "raid5f", 00:17:46.065 "superblock": true, 00:17:46.065 "num_base_bdevs": 4, 00:17:46.065 "num_base_bdevs_discovered": 4, 00:17:46.065 "num_base_bdevs_operational": 4, 00:17:46.065 "base_bdevs_list": [ 00:17:46.065 { 00:17:46.065 "name": "spare", 00:17:46.065 "uuid": "20f8951d-f014-50df-9f82-f37d8a591c38", 00:17:46.065 "is_configured": true, 00:17:46.065 "data_offset": 2048, 00:17:46.065 "data_size": 63488 00:17:46.065 }, 00:17:46.065 { 00:17:46.065 "name": "BaseBdev2", 00:17:46.065 "uuid": "cc113672-d7c8-5b79-b4f8-2591c56503f8", 00:17:46.065 "is_configured": true, 00:17:46.065 "data_offset": 2048, 00:17:46.065 "data_size": 63488 00:17:46.065 }, 00:17:46.065 { 00:17:46.065 "name": "BaseBdev3", 00:17:46.065 "uuid": "fe80fcb9-657e-5be5-a6a1-4b9dd32aa134", 00:17:46.065 "is_configured": true, 00:17:46.065 "data_offset": 2048, 00:17:46.065 "data_size": 63488 00:17:46.065 }, 00:17:46.065 { 00:17:46.065 "name": "BaseBdev4", 00:17:46.065 "uuid": "d5af59da-95ce-5b88-bd5f-c68dc886c036", 00:17:46.065 "is_configured": true, 00:17:46.065 "data_offset": 2048, 00:17:46.065 "data_size": 63488 00:17:46.065 } 00:17:46.065 ] 00:17:46.065 }' 00:17:46.065 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.065 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:46.065 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.065 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:46.065 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:46.065 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:46.065 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:46.065 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:46.065 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:46.065 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:46.065 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.065 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.065 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.065 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.065 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.065 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.065 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.065 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.065 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.065 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.065 "name": "raid_bdev1", 00:17:46.065 "uuid": "daa8233a-f2f2-43e8-8127-cf1f959e441b", 00:17:46.065 "strip_size_kb": 64, 00:17:46.065 "state": "online", 00:17:46.065 "raid_level": "raid5f", 00:17:46.065 "superblock": true, 00:17:46.065 "num_base_bdevs": 4, 00:17:46.065 "num_base_bdevs_discovered": 4, 00:17:46.065 "num_base_bdevs_operational": 4, 00:17:46.065 "base_bdevs_list": [ 00:17:46.065 { 00:17:46.065 "name": "spare", 00:17:46.065 "uuid": "20f8951d-f014-50df-9f82-f37d8a591c38", 00:17:46.065 "is_configured": true, 00:17:46.065 
"data_offset": 2048, 00:17:46.065 "data_size": 63488 00:17:46.065 }, 00:17:46.065 { 00:17:46.065 "name": "BaseBdev2", 00:17:46.065 "uuid": "cc113672-d7c8-5b79-b4f8-2591c56503f8", 00:17:46.065 "is_configured": true, 00:17:46.065 "data_offset": 2048, 00:17:46.065 "data_size": 63488 00:17:46.065 }, 00:17:46.065 { 00:17:46.065 "name": "BaseBdev3", 00:17:46.065 "uuid": "fe80fcb9-657e-5be5-a6a1-4b9dd32aa134", 00:17:46.065 "is_configured": true, 00:17:46.065 "data_offset": 2048, 00:17:46.065 "data_size": 63488 00:17:46.065 }, 00:17:46.065 { 00:17:46.065 "name": "BaseBdev4", 00:17:46.065 "uuid": "d5af59da-95ce-5b88-bd5f-c68dc886c036", 00:17:46.065 "is_configured": true, 00:17:46.065 "data_offset": 2048, 00:17:46.065 "data_size": 63488 00:17:46.065 } 00:17:46.065 ] 00:17:46.065 }' 00:17:46.065 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.065 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.637 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:46.637 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.637 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.637 [2024-11-20 10:40:49.962923] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:46.637 [2024-11-20 10:40:49.962952] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:46.637 [2024-11-20 10:40:49.963026] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:46.637 [2024-11-20 10:40:49.963118] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:46.637 [2024-11-20 10:40:49.963139] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:46.637 
10:40:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.637 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.637 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:46.637 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.637 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.637 10:40:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.637 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:46.637 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:46.637 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:46.637 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:46.637 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:46.637 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:46.637 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:46.637 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:46.637 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:46.637 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:46.637 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:46.637 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:46.637 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:46.898 /dev/nbd0 00:17:46.898 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:46.898 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:46.898 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:46.898 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:46.898 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:46.898 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:46.898 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:46.898 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:46.898 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:46.898 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:46.898 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:46.898 1+0 records in 00:17:46.898 1+0 records out 00:17:46.898 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349529 s, 11.7 MB/s 00:17:46.898 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:46.898 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:46.898 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:46.898 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:46.898 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:46.898 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:46.898 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:46.898 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:47.158 /dev/nbd1 00:17:47.158 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:47.158 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:47.158 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:47.158 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:47.158 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:47.158 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:47.158 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:47.158 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:47.158 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:47.158 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:47.158 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:47.158 1+0 records in 00:17:47.158 1+0 records out 00:17:47.158 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347163 s, 11.8 MB/s 00:17:47.158 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:47.158 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:47.158 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:47.158 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:47.158 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:47.158 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:47.158 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:47.158 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:47.418 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:47.418 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:47.418 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:47.418 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:47.418 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:47.418 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:47.418 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:47.418 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:47.418 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:47.418 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:47.418 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:47.418 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:47.678 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:47.678 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:47.678 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:47.678 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:47.678 10:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:47.679 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:47.679 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:47.679 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:47.679 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:47.679 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:47.679 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:47.679 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:47.679 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:47.679 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:47.679 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:47.679 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.679 
10:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.679 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.679 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:47.679 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.679 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.679 [2024-11-20 10:40:51.123650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:47.679 [2024-11-20 10:40:51.123710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.679 [2024-11-20 10:40:51.123737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:47.679 [2024-11-20 10:40:51.123746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.679 [2024-11-20 10:40:51.125972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.679 [2024-11-20 10:40:51.126060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:47.679 [2024-11-20 10:40:51.126173] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:47.679 [2024-11-20 10:40:51.126239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:47.679 [2024-11-20 10:40:51.126401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:47.679 [2024-11-20 10:40:51.126490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:47.679 [2024-11-20 10:40:51.126563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:47.679 spare 00:17:47.679 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:47.679 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:47.679 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.679 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.939 [2024-11-20 10:40:51.226461] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:47.939 [2024-11-20 10:40:51.226489] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:47.939 [2024-11-20 10:40:51.226725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:17:47.939 [2024-11-20 10:40:51.233317] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:47.939 [2024-11-20 10:40:51.233336] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:47.939 [2024-11-20 10:40:51.233521] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.939 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.939 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:47.939 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:47.939 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.939 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:47.939 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:47.939 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:47.939 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.939 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.939 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.939 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.939 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.939 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.939 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.939 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.939 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.939 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.939 "name": "raid_bdev1", 00:17:47.939 "uuid": "daa8233a-f2f2-43e8-8127-cf1f959e441b", 00:17:47.939 "strip_size_kb": 64, 00:17:47.939 "state": "online", 00:17:47.939 "raid_level": "raid5f", 00:17:47.939 "superblock": true, 00:17:47.939 "num_base_bdevs": 4, 00:17:47.939 "num_base_bdevs_discovered": 4, 00:17:47.939 "num_base_bdevs_operational": 4, 00:17:47.939 "base_bdevs_list": [ 00:17:47.939 { 00:17:47.939 "name": "spare", 00:17:47.939 "uuid": "20f8951d-f014-50df-9f82-f37d8a591c38", 00:17:47.939 "is_configured": true, 00:17:47.939 "data_offset": 2048, 00:17:47.939 "data_size": 63488 00:17:47.939 }, 00:17:47.939 { 00:17:47.939 "name": "BaseBdev2", 00:17:47.939 "uuid": "cc113672-d7c8-5b79-b4f8-2591c56503f8", 00:17:47.939 "is_configured": true, 00:17:47.939 "data_offset": 2048, 00:17:47.939 "data_size": 63488 00:17:47.939 }, 00:17:47.939 { 00:17:47.939 "name": "BaseBdev3", 00:17:47.939 "uuid": "fe80fcb9-657e-5be5-a6a1-4b9dd32aa134", 00:17:47.939 
"is_configured": true, 00:17:47.939 "data_offset": 2048, 00:17:47.939 "data_size": 63488 00:17:47.939 }, 00:17:47.939 { 00:17:47.939 "name": "BaseBdev4", 00:17:47.939 "uuid": "d5af59da-95ce-5b88-bd5f-c68dc886c036", 00:17:47.939 "is_configured": true, 00:17:47.939 "data_offset": 2048, 00:17:47.939 "data_size": 63488 00:17:47.939 } 00:17:47.939 ] 00:17:47.939 }' 00:17:47.939 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.939 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.515 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:48.515 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:48.515 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:48.515 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:48.515 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:48.515 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.515 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.515 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.515 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.515 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.515 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:48.515 "name": "raid_bdev1", 00:17:48.515 "uuid": "daa8233a-f2f2-43e8-8127-cf1f959e441b", 00:17:48.515 "strip_size_kb": 64, 00:17:48.515 "state": "online", 00:17:48.515 "raid_level": "raid5f", 
00:17:48.515 "superblock": true, 00:17:48.515 "num_base_bdevs": 4, 00:17:48.515 "num_base_bdevs_discovered": 4, 00:17:48.515 "num_base_bdevs_operational": 4, 00:17:48.515 "base_bdevs_list": [ 00:17:48.515 { 00:17:48.515 "name": "spare", 00:17:48.515 "uuid": "20f8951d-f014-50df-9f82-f37d8a591c38", 00:17:48.515 "is_configured": true, 00:17:48.515 "data_offset": 2048, 00:17:48.515 "data_size": 63488 00:17:48.515 }, 00:17:48.515 { 00:17:48.515 "name": "BaseBdev2", 00:17:48.515 "uuid": "cc113672-d7c8-5b79-b4f8-2591c56503f8", 00:17:48.515 "is_configured": true, 00:17:48.515 "data_offset": 2048, 00:17:48.515 "data_size": 63488 00:17:48.515 }, 00:17:48.515 { 00:17:48.515 "name": "BaseBdev3", 00:17:48.515 "uuid": "fe80fcb9-657e-5be5-a6a1-4b9dd32aa134", 00:17:48.515 "is_configured": true, 00:17:48.515 "data_offset": 2048, 00:17:48.515 "data_size": 63488 00:17:48.515 }, 00:17:48.515 { 00:17:48.515 "name": "BaseBdev4", 00:17:48.515 "uuid": "d5af59da-95ce-5b88-bd5f-c68dc886c036", 00:17:48.515 "is_configured": true, 00:17:48.515 "data_offset": 2048, 00:17:48.515 "data_size": 63488 00:17:48.515 } 00:17:48.515 ] 00:17:48.515 }' 00:17:48.515 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.515 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:48.515 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.515 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:48.515 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.515 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.515 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.515 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r 
'.[].base_bdevs_list[0].name' 00:17:48.515 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.515 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:48.515 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:48.515 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.515 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.515 [2024-11-20 10:40:51.904466] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:48.515 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.515 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:48.515 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.515 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.515 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:48.516 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:48.516 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:48.516 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.516 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.516 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.516 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.516 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.516 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.516 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.516 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.516 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.516 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.516 "name": "raid_bdev1", 00:17:48.516 "uuid": "daa8233a-f2f2-43e8-8127-cf1f959e441b", 00:17:48.516 "strip_size_kb": 64, 00:17:48.516 "state": "online", 00:17:48.516 "raid_level": "raid5f", 00:17:48.516 "superblock": true, 00:17:48.516 "num_base_bdevs": 4, 00:17:48.516 "num_base_bdevs_discovered": 3, 00:17:48.516 "num_base_bdevs_operational": 3, 00:17:48.516 "base_bdevs_list": [ 00:17:48.516 { 00:17:48.516 "name": null, 00:17:48.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.516 "is_configured": false, 00:17:48.516 "data_offset": 0, 00:17:48.516 "data_size": 63488 00:17:48.516 }, 00:17:48.516 { 00:17:48.516 "name": "BaseBdev2", 00:17:48.516 "uuid": "cc113672-d7c8-5b79-b4f8-2591c56503f8", 00:17:48.516 "is_configured": true, 00:17:48.516 "data_offset": 2048, 00:17:48.516 "data_size": 63488 00:17:48.516 }, 00:17:48.516 { 00:17:48.516 "name": "BaseBdev3", 00:17:48.516 "uuid": "fe80fcb9-657e-5be5-a6a1-4b9dd32aa134", 00:17:48.516 "is_configured": true, 00:17:48.516 "data_offset": 2048, 00:17:48.516 "data_size": 63488 00:17:48.516 }, 00:17:48.516 { 00:17:48.516 "name": "BaseBdev4", 00:17:48.516 "uuid": "d5af59da-95ce-5b88-bd5f-c68dc886c036", 00:17:48.516 "is_configured": true, 00:17:48.516 "data_offset": 2048, 00:17:48.516 "data_size": 63488 00:17:48.516 } 00:17:48.516 ] 00:17:48.516 }' 00:17:48.516 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:17:48.516 10:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.093 10:40:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:49.093 10:40:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.093 10:40:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.093 [2024-11-20 10:40:52.371687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:49.093 [2024-11-20 10:40:52.371904] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:49.093 [2024-11-20 10:40:52.371970] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:49.093 [2024-11-20 10:40:52.372027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:49.093 [2024-11-20 10:40:52.386565] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:17:49.093 10:40:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.093 10:40:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:49.093 [2024-11-20 10:40:52.395673] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:50.032 10:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:50.032 10:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:50.032 10:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:50.032 10:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:50.032 10:40:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:50.032 10:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.032 10:40:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.032 10:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.032 10:40:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.032 10:40:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.032 10:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:50.032 "name": "raid_bdev1", 00:17:50.032 "uuid": "daa8233a-f2f2-43e8-8127-cf1f959e441b", 00:17:50.032 "strip_size_kb": 64, 00:17:50.032 "state": "online", 00:17:50.032 "raid_level": "raid5f", 00:17:50.032 "superblock": true, 00:17:50.032 "num_base_bdevs": 4, 00:17:50.032 "num_base_bdevs_discovered": 4, 00:17:50.032 "num_base_bdevs_operational": 4, 00:17:50.032 "process": { 00:17:50.032 "type": "rebuild", 00:17:50.032 "target": "spare", 00:17:50.032 "progress": { 00:17:50.032 "blocks": 19200, 00:17:50.032 "percent": 10 00:17:50.032 } 00:17:50.032 }, 00:17:50.032 "base_bdevs_list": [ 00:17:50.032 { 00:17:50.032 "name": "spare", 00:17:50.032 "uuid": "20f8951d-f014-50df-9f82-f37d8a591c38", 00:17:50.032 "is_configured": true, 00:17:50.032 "data_offset": 2048, 00:17:50.032 "data_size": 63488 00:17:50.032 }, 00:17:50.032 { 00:17:50.032 "name": "BaseBdev2", 00:17:50.032 "uuid": "cc113672-d7c8-5b79-b4f8-2591c56503f8", 00:17:50.032 "is_configured": true, 00:17:50.032 "data_offset": 2048, 00:17:50.032 "data_size": 63488 00:17:50.032 }, 00:17:50.032 { 00:17:50.032 "name": "BaseBdev3", 00:17:50.032 "uuid": "fe80fcb9-657e-5be5-a6a1-4b9dd32aa134", 00:17:50.032 "is_configured": true, 00:17:50.032 "data_offset": 2048, 00:17:50.032 "data_size": 
63488 00:17:50.032 }, 00:17:50.032 { 00:17:50.032 "name": "BaseBdev4", 00:17:50.032 "uuid": "d5af59da-95ce-5b88-bd5f-c68dc886c036", 00:17:50.032 "is_configured": true, 00:17:50.032 "data_offset": 2048, 00:17:50.032 "data_size": 63488 00:17:50.032 } 00:17:50.032 ] 00:17:50.032 }' 00:17:50.032 10:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:50.032 10:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:50.032 10:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:50.293 10:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:50.293 10:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:50.293 10:40:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.293 10:40:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.293 [2024-11-20 10:40:53.555276] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:50.293 [2024-11-20 10:40:53.601335] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:50.293 [2024-11-20 10:40:53.601494] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:50.293 [2024-11-20 10:40:53.601554] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:50.293 [2024-11-20 10:40:53.601579] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:50.293 10:40:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.293 10:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:50.293 10:40:53 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.293 10:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.293 10:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:50.293 10:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:50.293 10:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:50.293 10:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.293 10:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.293 10:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.293 10:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.293 10:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.293 10:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.293 10:40:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.293 10:40:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.293 10:40:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.293 10:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.293 "name": "raid_bdev1", 00:17:50.293 "uuid": "daa8233a-f2f2-43e8-8127-cf1f959e441b", 00:17:50.293 "strip_size_kb": 64, 00:17:50.293 "state": "online", 00:17:50.293 "raid_level": "raid5f", 00:17:50.293 "superblock": true, 00:17:50.293 "num_base_bdevs": 4, 00:17:50.293 "num_base_bdevs_discovered": 3, 00:17:50.293 "num_base_bdevs_operational": 3, 00:17:50.293 "base_bdevs_list": [ 00:17:50.293 
{ 00:17:50.293 "name": null, 00:17:50.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.293 "is_configured": false, 00:17:50.293 "data_offset": 0, 00:17:50.293 "data_size": 63488 00:17:50.293 }, 00:17:50.293 { 00:17:50.293 "name": "BaseBdev2", 00:17:50.293 "uuid": "cc113672-d7c8-5b79-b4f8-2591c56503f8", 00:17:50.293 "is_configured": true, 00:17:50.293 "data_offset": 2048, 00:17:50.293 "data_size": 63488 00:17:50.293 }, 00:17:50.293 { 00:17:50.293 "name": "BaseBdev3", 00:17:50.293 "uuid": "fe80fcb9-657e-5be5-a6a1-4b9dd32aa134", 00:17:50.293 "is_configured": true, 00:17:50.293 "data_offset": 2048, 00:17:50.293 "data_size": 63488 00:17:50.293 }, 00:17:50.293 { 00:17:50.293 "name": "BaseBdev4", 00:17:50.293 "uuid": "d5af59da-95ce-5b88-bd5f-c68dc886c036", 00:17:50.293 "is_configured": true, 00:17:50.293 "data_offset": 2048, 00:17:50.293 "data_size": 63488 00:17:50.293 } 00:17:50.293 ] 00:17:50.293 }' 00:17:50.293 10:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.293 10:40:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.863 10:40:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:50.863 10:40:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.863 10:40:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.863 [2024-11-20 10:40:54.103640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:50.863 [2024-11-20 10:40:54.103750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.863 [2024-11-20 10:40:54.103796] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:17:50.863 [2024-11-20 10:40:54.103827] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.863 [2024-11-20 10:40:54.104331] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.863 [2024-11-20 10:40:54.104418] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:50.864 [2024-11-20 10:40:54.104540] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:50.864 [2024-11-20 10:40:54.104584] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:50.864 [2024-11-20 10:40:54.104624] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:50.864 [2024-11-20 10:40:54.104669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:50.864 [2024-11-20 10:40:54.118919] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:17:50.864 spare 00:17:50.864 10:40:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.864 10:40:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:50.864 [2024-11-20 10:40:54.127488] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:51.802 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:51.802 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:51.802 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:51.802 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:51.802 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:51.802 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.802 10:40:55 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.802 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.802 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.802 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.802 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:51.802 "name": "raid_bdev1", 00:17:51.802 "uuid": "daa8233a-f2f2-43e8-8127-cf1f959e441b", 00:17:51.802 "strip_size_kb": 64, 00:17:51.802 "state": "online", 00:17:51.802 "raid_level": "raid5f", 00:17:51.802 "superblock": true, 00:17:51.802 "num_base_bdevs": 4, 00:17:51.802 "num_base_bdevs_discovered": 4, 00:17:51.802 "num_base_bdevs_operational": 4, 00:17:51.802 "process": { 00:17:51.802 "type": "rebuild", 00:17:51.802 "target": "spare", 00:17:51.802 "progress": { 00:17:51.802 "blocks": 19200, 00:17:51.802 "percent": 10 00:17:51.802 } 00:17:51.802 }, 00:17:51.802 "base_bdevs_list": [ 00:17:51.802 { 00:17:51.802 "name": "spare", 00:17:51.802 "uuid": "20f8951d-f014-50df-9f82-f37d8a591c38", 00:17:51.802 "is_configured": true, 00:17:51.802 "data_offset": 2048, 00:17:51.802 "data_size": 63488 00:17:51.802 }, 00:17:51.802 { 00:17:51.802 "name": "BaseBdev2", 00:17:51.802 "uuid": "cc113672-d7c8-5b79-b4f8-2591c56503f8", 00:17:51.802 "is_configured": true, 00:17:51.802 "data_offset": 2048, 00:17:51.802 "data_size": 63488 00:17:51.802 }, 00:17:51.802 { 00:17:51.802 "name": "BaseBdev3", 00:17:51.802 "uuid": "fe80fcb9-657e-5be5-a6a1-4b9dd32aa134", 00:17:51.802 "is_configured": true, 00:17:51.802 "data_offset": 2048, 00:17:51.802 "data_size": 63488 00:17:51.802 }, 00:17:51.802 { 00:17:51.802 "name": "BaseBdev4", 00:17:51.802 "uuid": "d5af59da-95ce-5b88-bd5f-c68dc886c036", 00:17:51.802 "is_configured": true, 00:17:51.802 "data_offset": 2048, 00:17:51.802 "data_size": 63488 00:17:51.802 } 
00:17:51.802 ] 00:17:51.802 }' 00:17:51.802 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:51.802 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:51.802 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:51.802 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:51.802 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:51.802 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.802 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.062 [2024-11-20 10:40:55.278152] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:52.062 [2024-11-20 10:40:55.333128] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:52.062 [2024-11-20 10:40:55.333182] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:52.062 [2024-11-20 10:40:55.333218] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:52.062 [2024-11-20 10:40:55.333225] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:52.062 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.062 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:52.062 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:52.062 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.062 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:52.062 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:52.062 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:52.062 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.062 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.062 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.062 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.062 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.062 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.062 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.062 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.062 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.062 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.062 "name": "raid_bdev1", 00:17:52.062 "uuid": "daa8233a-f2f2-43e8-8127-cf1f959e441b", 00:17:52.062 "strip_size_kb": 64, 00:17:52.062 "state": "online", 00:17:52.062 "raid_level": "raid5f", 00:17:52.062 "superblock": true, 00:17:52.062 "num_base_bdevs": 4, 00:17:52.062 "num_base_bdevs_discovered": 3, 00:17:52.062 "num_base_bdevs_operational": 3, 00:17:52.062 "base_bdevs_list": [ 00:17:52.062 { 00:17:52.062 "name": null, 00:17:52.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.062 "is_configured": false, 00:17:52.062 "data_offset": 0, 00:17:52.062 "data_size": 63488 00:17:52.062 }, 00:17:52.062 { 00:17:52.062 
"name": "BaseBdev2", 00:17:52.062 "uuid": "cc113672-d7c8-5b79-b4f8-2591c56503f8", 00:17:52.062 "is_configured": true, 00:17:52.062 "data_offset": 2048, 00:17:52.062 "data_size": 63488 00:17:52.062 }, 00:17:52.062 { 00:17:52.062 "name": "BaseBdev3", 00:17:52.062 "uuid": "fe80fcb9-657e-5be5-a6a1-4b9dd32aa134", 00:17:52.062 "is_configured": true, 00:17:52.062 "data_offset": 2048, 00:17:52.062 "data_size": 63488 00:17:52.062 }, 00:17:52.062 { 00:17:52.062 "name": "BaseBdev4", 00:17:52.062 "uuid": "d5af59da-95ce-5b88-bd5f-c68dc886c036", 00:17:52.062 "is_configured": true, 00:17:52.062 "data_offset": 2048, 00:17:52.062 "data_size": 63488 00:17:52.062 } 00:17:52.062 ] 00:17:52.062 }' 00:17:52.062 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.062 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.322 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:52.322 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:52.322 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:52.322 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:52.322 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:52.322 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.322 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.322 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.322 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.582 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:17:52.582 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:52.582 "name": "raid_bdev1", 00:17:52.582 "uuid": "daa8233a-f2f2-43e8-8127-cf1f959e441b", 00:17:52.582 "strip_size_kb": 64, 00:17:52.582 "state": "online", 00:17:52.582 "raid_level": "raid5f", 00:17:52.582 "superblock": true, 00:17:52.582 "num_base_bdevs": 4, 00:17:52.582 "num_base_bdevs_discovered": 3, 00:17:52.582 "num_base_bdevs_operational": 3, 00:17:52.582 "base_bdevs_list": [ 00:17:52.582 { 00:17:52.582 "name": null, 00:17:52.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.582 "is_configured": false, 00:17:52.582 "data_offset": 0, 00:17:52.582 "data_size": 63488 00:17:52.582 }, 00:17:52.582 { 00:17:52.582 "name": "BaseBdev2", 00:17:52.582 "uuid": "cc113672-d7c8-5b79-b4f8-2591c56503f8", 00:17:52.582 "is_configured": true, 00:17:52.582 "data_offset": 2048, 00:17:52.582 "data_size": 63488 00:17:52.582 }, 00:17:52.582 { 00:17:52.582 "name": "BaseBdev3", 00:17:52.582 "uuid": "fe80fcb9-657e-5be5-a6a1-4b9dd32aa134", 00:17:52.582 "is_configured": true, 00:17:52.582 "data_offset": 2048, 00:17:52.582 "data_size": 63488 00:17:52.582 }, 00:17:52.582 { 00:17:52.582 "name": "BaseBdev4", 00:17:52.582 "uuid": "d5af59da-95ce-5b88-bd5f-c68dc886c036", 00:17:52.582 "is_configured": true, 00:17:52.582 "data_offset": 2048, 00:17:52.582 "data_size": 63488 00:17:52.582 } 00:17:52.582 ] 00:17:52.582 }' 00:17:52.582 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:52.582 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:52.582 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:52.582 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:52.582 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd 
bdev_passthru_delete BaseBdev1 00:17:52.582 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.582 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.582 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.582 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:52.582 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.582 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.582 [2024-11-20 10:40:55.952561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:52.582 [2024-11-20 10:40:55.952624] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.582 [2024-11-20 10:40:55.952660] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:17:52.582 [2024-11-20 10:40:55.952668] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.582 [2024-11-20 10:40:55.953114] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.582 [2024-11-20 10:40:55.953132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:52.582 [2024-11-20 10:40:55.953206] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:52.582 [2024-11-20 10:40:55.953219] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:52.582 [2024-11-20 10:40:55.953231] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:52.582 [2024-11-20 10:40:55.953242] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to 
examine bdev BaseBdev1: Invalid argument 00:17:52.582 BaseBdev1 00:17:52.582 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.583 10:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:53.522 10:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:53.522 10:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:53.522 10:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.522 10:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:53.522 10:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:53.522 10:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:53.522 10:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.522 10:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.522 10:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.522 10:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.522 10:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.522 10:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.522 10:40:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.522 10:40:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.522 10:40:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.781 10:40:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.781 "name": "raid_bdev1", 00:17:53.781 "uuid": "daa8233a-f2f2-43e8-8127-cf1f959e441b", 00:17:53.782 "strip_size_kb": 64, 00:17:53.782 "state": "online", 00:17:53.782 "raid_level": "raid5f", 00:17:53.782 "superblock": true, 00:17:53.782 "num_base_bdevs": 4, 00:17:53.782 "num_base_bdevs_discovered": 3, 00:17:53.782 "num_base_bdevs_operational": 3, 00:17:53.782 "base_bdevs_list": [ 00:17:53.782 { 00:17:53.782 "name": null, 00:17:53.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.782 "is_configured": false, 00:17:53.782 "data_offset": 0, 00:17:53.782 "data_size": 63488 00:17:53.782 }, 00:17:53.782 { 00:17:53.782 "name": "BaseBdev2", 00:17:53.782 "uuid": "cc113672-d7c8-5b79-b4f8-2591c56503f8", 00:17:53.782 "is_configured": true, 00:17:53.782 "data_offset": 2048, 00:17:53.782 "data_size": 63488 00:17:53.782 }, 00:17:53.782 { 00:17:53.782 "name": "BaseBdev3", 00:17:53.782 "uuid": "fe80fcb9-657e-5be5-a6a1-4b9dd32aa134", 00:17:53.782 "is_configured": true, 00:17:53.782 "data_offset": 2048, 00:17:53.782 "data_size": 63488 00:17:53.782 }, 00:17:53.782 { 00:17:53.782 "name": "BaseBdev4", 00:17:53.782 "uuid": "d5af59da-95ce-5b88-bd5f-c68dc886c036", 00:17:53.782 "is_configured": true, 00:17:53.782 "data_offset": 2048, 00:17:53.782 "data_size": 63488 00:17:53.782 } 00:17:53.782 ] 00:17:53.782 }' 00:17:53.782 10:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.782 10:40:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.042 10:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:54.042 10:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:54.042 10:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:54.042 10:40:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:54.042 10:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:54.042 10:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.042 10:40:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.042 10:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.042 10:40:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.042 10:40:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.042 10:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:54.042 "name": "raid_bdev1", 00:17:54.042 "uuid": "daa8233a-f2f2-43e8-8127-cf1f959e441b", 00:17:54.042 "strip_size_kb": 64, 00:17:54.042 "state": "online", 00:17:54.042 "raid_level": "raid5f", 00:17:54.042 "superblock": true, 00:17:54.042 "num_base_bdevs": 4, 00:17:54.042 "num_base_bdevs_discovered": 3, 00:17:54.042 "num_base_bdevs_operational": 3, 00:17:54.042 "base_bdevs_list": [ 00:17:54.042 { 00:17:54.042 "name": null, 00:17:54.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.042 "is_configured": false, 00:17:54.042 "data_offset": 0, 00:17:54.042 "data_size": 63488 00:17:54.042 }, 00:17:54.042 { 00:17:54.042 "name": "BaseBdev2", 00:17:54.042 "uuid": "cc113672-d7c8-5b79-b4f8-2591c56503f8", 00:17:54.042 "is_configured": true, 00:17:54.042 "data_offset": 2048, 00:17:54.042 "data_size": 63488 00:17:54.042 }, 00:17:54.042 { 00:17:54.042 "name": "BaseBdev3", 00:17:54.042 "uuid": "fe80fcb9-657e-5be5-a6a1-4b9dd32aa134", 00:17:54.042 "is_configured": true, 00:17:54.042 "data_offset": 2048, 00:17:54.042 "data_size": 63488 00:17:54.042 }, 00:17:54.042 { 00:17:54.042 "name": "BaseBdev4", 00:17:54.042 "uuid": 
"d5af59da-95ce-5b88-bd5f-c68dc886c036", 00:17:54.042 "is_configured": true, 00:17:54.042 "data_offset": 2048, 00:17:54.042 "data_size": 63488 00:17:54.042 } 00:17:54.042 ] 00:17:54.042 }' 00:17:54.042 10:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:54.042 10:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:54.042 10:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:54.302 10:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:54.302 10:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:54.302 10:40:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:17:54.302 10:40:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:54.302 10:40:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:54.302 10:40:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.302 10:40:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:54.302 10:40:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.302 10:40:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:54.302 10:40:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.302 10:40:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.302 [2024-11-20 10:40:57.541917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:54.302 
[2024-11-20 10:40:57.542072] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:54.302 [2024-11-20 10:40:57.542089] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:54.302 request: 00:17:54.302 { 00:17:54.302 "base_bdev": "BaseBdev1", 00:17:54.302 "raid_bdev": "raid_bdev1", 00:17:54.302 "method": "bdev_raid_add_base_bdev", 00:17:54.302 "req_id": 1 00:17:54.302 } 00:17:54.302 Got JSON-RPC error response 00:17:54.302 response: 00:17:54.302 { 00:17:54.302 "code": -22, 00:17:54.302 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:54.302 } 00:17:54.302 10:40:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:54.302 10:40:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:17:54.302 10:40:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:54.302 10:40:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:54.302 10:40:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:54.302 10:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:55.241 10:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:55.241 10:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:55.241 10:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:55.241 10:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:55.241 10:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:55.241 10:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:17:55.241 10:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.241 10:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.241 10:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.241 10:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.241 10:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.241 10:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.241 10:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.241 10:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.241 10:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.241 10:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.241 "name": "raid_bdev1", 00:17:55.241 "uuid": "daa8233a-f2f2-43e8-8127-cf1f959e441b", 00:17:55.241 "strip_size_kb": 64, 00:17:55.241 "state": "online", 00:17:55.241 "raid_level": "raid5f", 00:17:55.241 "superblock": true, 00:17:55.241 "num_base_bdevs": 4, 00:17:55.241 "num_base_bdevs_discovered": 3, 00:17:55.241 "num_base_bdevs_operational": 3, 00:17:55.241 "base_bdevs_list": [ 00:17:55.241 { 00:17:55.241 "name": null, 00:17:55.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.241 "is_configured": false, 00:17:55.241 "data_offset": 0, 00:17:55.241 "data_size": 63488 00:17:55.241 }, 00:17:55.241 { 00:17:55.241 "name": "BaseBdev2", 00:17:55.241 "uuid": "cc113672-d7c8-5b79-b4f8-2591c56503f8", 00:17:55.241 "is_configured": true, 00:17:55.241 "data_offset": 2048, 00:17:55.241 "data_size": 63488 00:17:55.241 }, 00:17:55.241 { 00:17:55.241 "name": 
"BaseBdev3", 00:17:55.241 "uuid": "fe80fcb9-657e-5be5-a6a1-4b9dd32aa134", 00:17:55.241 "is_configured": true, 00:17:55.241 "data_offset": 2048, 00:17:55.241 "data_size": 63488 00:17:55.241 }, 00:17:55.241 { 00:17:55.241 "name": "BaseBdev4", 00:17:55.241 "uuid": "d5af59da-95ce-5b88-bd5f-c68dc886c036", 00:17:55.241 "is_configured": true, 00:17:55.241 "data_offset": 2048, 00:17:55.241 "data_size": 63488 00:17:55.241 } 00:17:55.241 ] 00:17:55.241 }' 00:17:55.241 10:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.241 10:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.816 10:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:55.816 10:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:55.816 10:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:55.816 10:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:55.816 10:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:55.816 10:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.816 10:40:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.816 10:40:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.816 10:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.816 10:40:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.816 10:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:55.816 "name": "raid_bdev1", 00:17:55.816 "uuid": "daa8233a-f2f2-43e8-8127-cf1f959e441b", 00:17:55.816 
"strip_size_kb": 64, 00:17:55.816 "state": "online", 00:17:55.816 "raid_level": "raid5f", 00:17:55.816 "superblock": true, 00:17:55.816 "num_base_bdevs": 4, 00:17:55.816 "num_base_bdevs_discovered": 3, 00:17:55.816 "num_base_bdevs_operational": 3, 00:17:55.816 "base_bdevs_list": [ 00:17:55.816 { 00:17:55.816 "name": null, 00:17:55.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.816 "is_configured": false, 00:17:55.816 "data_offset": 0, 00:17:55.816 "data_size": 63488 00:17:55.816 }, 00:17:55.816 { 00:17:55.816 "name": "BaseBdev2", 00:17:55.816 "uuid": "cc113672-d7c8-5b79-b4f8-2591c56503f8", 00:17:55.816 "is_configured": true, 00:17:55.816 "data_offset": 2048, 00:17:55.816 "data_size": 63488 00:17:55.816 }, 00:17:55.816 { 00:17:55.816 "name": "BaseBdev3", 00:17:55.816 "uuid": "fe80fcb9-657e-5be5-a6a1-4b9dd32aa134", 00:17:55.816 "is_configured": true, 00:17:55.816 "data_offset": 2048, 00:17:55.816 "data_size": 63488 00:17:55.816 }, 00:17:55.816 { 00:17:55.816 "name": "BaseBdev4", 00:17:55.816 "uuid": "d5af59da-95ce-5b88-bd5f-c68dc886c036", 00:17:55.816 "is_configured": true, 00:17:55.816 "data_offset": 2048, 00:17:55.816 "data_size": 63488 00:17:55.816 } 00:17:55.816 ] 00:17:55.816 }' 00:17:55.816 10:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:55.816 10:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:55.816 10:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:55.816 10:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:55.816 10:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85272 00:17:55.816 10:40:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85272 ']' 00:17:55.816 10:40:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85272 00:17:55.816 
10:40:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:55.816 10:40:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:55.816 10:40:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85272 00:17:55.816 10:40:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:55.816 10:40:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:55.816 10:40:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85272' 00:17:55.816 killing process with pid 85272 00:17:55.816 Received shutdown signal, test time was about 60.000000 seconds 00:17:55.816 00:17:55.816 Latency(us) 00:17:55.816 [2024-11-20T10:40:59.295Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.816 [2024-11-20T10:40:59.295Z] =================================================================================================================== 00:17:55.816 [2024-11-20T10:40:59.295Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:55.816 10:40:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85272 00:17:55.816 [2024-11-20 10:40:59.161168] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:55.816 [2024-11-20 10:40:59.161283] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:55.816 [2024-11-20 10:40:59.161368] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:55.816 [2024-11-20 10:40:59.161381] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:55.816 10:40:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85272 00:17:56.391 [2024-11-20 10:40:59.617492] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:57.337 10:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:57.337 00:17:57.337 real 0m26.722s 00:17:57.337 user 0m33.606s 00:17:57.337 sys 0m2.881s 00:17:57.337 10:41:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:57.337 10:41:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.337 ************************************ 00:17:57.337 END TEST raid5f_rebuild_test_sb 00:17:57.337 ************************************ 00:17:57.337 10:41:00 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:17:57.337 10:41:00 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:17:57.337 10:41:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:57.337 10:41:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:57.337 10:41:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:57.337 ************************************ 00:17:57.337 START TEST raid_state_function_test_sb_4k 00:17:57.337 ************************************ 00:17:57.337 10:41:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:57.337 10:41:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:57.337 10:41:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:57.337 10:41:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:57.337 10:41:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:57.337 10:41:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:57.337 10:41:00 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:57.337 10:41:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:57.337 10:41:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:57.337 10:41:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:57.337 10:41:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:57.337 10:41:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:57.337 10:41:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:57.337 10:41:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:57.337 10:41:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:57.337 10:41:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:57.337 10:41:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:57.337 10:41:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:57.337 10:41:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:57.337 10:41:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:57.337 10:41:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:57.337 10:41:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:57.337 10:41:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:57.337 Process raid pid: 86082 00:17:57.337 10:41:00 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@229 -- # raid_pid=86082 00:17:57.337 10:41:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:57.337 10:41:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86082' 00:17:57.337 10:41:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86082 00:17:57.337 10:41:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86082 ']' 00:17:57.337 10:41:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.337 10:41:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:57.337 10:41:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.337 10:41:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:57.337 10:41:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.609 [2024-11-20 10:41:00.813896] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:17:57.609 [2024-11-20 10:41:00.813995] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.609 [2024-11-20 10:41:00.986690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.868 [2024-11-20 10:41:01.094093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.868 [2024-11-20 10:41:01.291437] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:57.868 [2024-11-20 10:41:01.291484] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:58.437 10:41:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:58.438 10:41:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:17:58.438 10:41:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:58.438 10:41:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.438 10:41:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.438 [2024-11-20 10:41:01.654536] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:58.438 [2024-11-20 10:41:01.654640] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:58.438 [2024-11-20 10:41:01.654657] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:58.438 [2024-11-20 10:41:01.654668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:58.438 10:41:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:58.438 10:41:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:58.438 10:41:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:58.438 10:41:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:58.438 10:41:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:58.438 10:41:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:58.438 10:41:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:58.438 10:41:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.438 10:41:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.438 10:41:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.438 10:41:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.438 10:41:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.438 10:41:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.438 10:41:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.438 10:41:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.438 10:41:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.438 10:41:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.438 "name": "Existed_Raid", 00:17:58.438 "uuid": 
"ce639983-6161-43aa-a2f5-996ab0add329", 00:17:58.438 "strip_size_kb": 0, 00:17:58.438 "state": "configuring", 00:17:58.438 "raid_level": "raid1", 00:17:58.438 "superblock": true, 00:17:58.438 "num_base_bdevs": 2, 00:17:58.438 "num_base_bdevs_discovered": 0, 00:17:58.438 "num_base_bdevs_operational": 2, 00:17:58.438 "base_bdevs_list": [ 00:17:58.438 { 00:17:58.438 "name": "BaseBdev1", 00:17:58.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.438 "is_configured": false, 00:17:58.438 "data_offset": 0, 00:17:58.438 "data_size": 0 00:17:58.438 }, 00:17:58.438 { 00:17:58.438 "name": "BaseBdev2", 00:17:58.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.438 "is_configured": false, 00:17:58.438 "data_offset": 0, 00:17:58.438 "data_size": 0 00:17:58.438 } 00:17:58.438 ] 00:17:58.438 }' 00:17:58.438 10:41:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.438 10:41:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.698 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:58.698 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.698 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.698 [2024-11-20 10:41:02.141610] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:58.698 [2024-11-20 10:41:02.141683] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:58.698 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.698 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:58.698 10:41:02 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.698 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.698 [2024-11-20 10:41:02.153594] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:58.698 [2024-11-20 10:41:02.153677] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:58.698 [2024-11-20 10:41:02.153704] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:58.698 [2024-11-20 10:41:02.153728] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:58.698 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.698 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:17:58.698 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.698 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.957 [2024-11-20 10:41:02.201232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:58.957 BaseBdev1 00:17:58.957 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.957 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:58.957 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:58.957 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:58.957 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:17:58.957 10:41:02 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:58.957 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:58.957 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:58.957 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.957 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.957 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.957 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:58.957 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.957 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.957 [ 00:17:58.957 { 00:17:58.957 "name": "BaseBdev1", 00:17:58.957 "aliases": [ 00:17:58.957 "03f5b786-3da8-4535-b70c-a6d757e17682" 00:17:58.957 ], 00:17:58.957 "product_name": "Malloc disk", 00:17:58.957 "block_size": 4096, 00:17:58.957 "num_blocks": 8192, 00:17:58.957 "uuid": "03f5b786-3da8-4535-b70c-a6d757e17682", 00:17:58.957 "assigned_rate_limits": { 00:17:58.957 "rw_ios_per_sec": 0, 00:17:58.957 "rw_mbytes_per_sec": 0, 00:17:58.957 "r_mbytes_per_sec": 0, 00:17:58.957 "w_mbytes_per_sec": 0 00:17:58.957 }, 00:17:58.957 "claimed": true, 00:17:58.957 "claim_type": "exclusive_write", 00:17:58.957 "zoned": false, 00:17:58.957 "supported_io_types": { 00:17:58.957 "read": true, 00:17:58.957 "write": true, 00:17:58.957 "unmap": true, 00:17:58.957 "flush": true, 00:17:58.957 "reset": true, 00:17:58.957 "nvme_admin": false, 00:17:58.957 "nvme_io": false, 00:17:58.957 "nvme_io_md": false, 00:17:58.957 "write_zeroes": true, 00:17:58.957 "zcopy": true, 00:17:58.957 
"get_zone_info": false, 00:17:58.957 "zone_management": false, 00:17:58.957 "zone_append": false, 00:17:58.957 "compare": false, 00:17:58.957 "compare_and_write": false, 00:17:58.957 "abort": true, 00:17:58.957 "seek_hole": false, 00:17:58.957 "seek_data": false, 00:17:58.957 "copy": true, 00:17:58.957 "nvme_iov_md": false 00:17:58.957 }, 00:17:58.957 "memory_domains": [ 00:17:58.957 { 00:17:58.957 "dma_device_id": "system", 00:17:58.957 "dma_device_type": 1 00:17:58.957 }, 00:17:58.957 { 00:17:58.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.957 "dma_device_type": 2 00:17:58.957 } 00:17:58.957 ], 00:17:58.957 "driver_specific": {} 00:17:58.957 } 00:17:58.957 ] 00:17:58.957 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.957 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:17:58.957 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:58.957 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:58.957 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:58.957 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:58.957 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:58.957 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:58.957 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.957 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.957 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:58.957 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.957 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.957 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.957 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.957 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.957 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.957 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.957 "name": "Existed_Raid", 00:17:58.957 "uuid": "ca8d0cc4-2f33-4e0f-bd86-e6eebc6be970", 00:17:58.957 "strip_size_kb": 0, 00:17:58.957 "state": "configuring", 00:17:58.957 "raid_level": "raid1", 00:17:58.957 "superblock": true, 00:17:58.957 "num_base_bdevs": 2, 00:17:58.957 "num_base_bdevs_discovered": 1, 00:17:58.957 "num_base_bdevs_operational": 2, 00:17:58.957 "base_bdevs_list": [ 00:17:58.957 { 00:17:58.957 "name": "BaseBdev1", 00:17:58.957 "uuid": "03f5b786-3da8-4535-b70c-a6d757e17682", 00:17:58.957 "is_configured": true, 00:17:58.957 "data_offset": 256, 00:17:58.957 "data_size": 7936 00:17:58.957 }, 00:17:58.957 { 00:17:58.957 "name": "BaseBdev2", 00:17:58.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.957 "is_configured": false, 00:17:58.957 "data_offset": 0, 00:17:58.957 "data_size": 0 00:17:58.957 } 00:17:58.957 ] 00:17:58.957 }' 00:17:58.957 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.957 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.217 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:59.217 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.217 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.217 [2024-11-20 10:41:02.668435] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:59.217 [2024-11-20 10:41:02.668473] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:59.217 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.217 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:59.217 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.217 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.217 [2024-11-20 10:41:02.680471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:59.217 [2024-11-20 10:41:02.682238] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:59.217 [2024-11-20 10:41:02.682279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:59.217 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.217 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:59.217 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:59.217 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:59.217 10:41:02 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:59.217 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:59.217 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:59.217 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:59.217 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:59.217 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.217 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.217 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.217 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.217 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.476 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.476 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.476 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:59.476 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.476 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.476 "name": "Existed_Raid", 00:17:59.476 "uuid": "7e2ecf47-2b39-4c9b-8908-3e182898c19e", 00:17:59.476 "strip_size_kb": 0, 00:17:59.476 "state": "configuring", 00:17:59.476 "raid_level": "raid1", 00:17:59.476 "superblock": true, 
00:17:59.476 "num_base_bdevs": 2, 00:17:59.476 "num_base_bdevs_discovered": 1, 00:17:59.476 "num_base_bdevs_operational": 2, 00:17:59.476 "base_bdevs_list": [ 00:17:59.476 { 00:17:59.476 "name": "BaseBdev1", 00:17:59.476 "uuid": "03f5b786-3da8-4535-b70c-a6d757e17682", 00:17:59.476 "is_configured": true, 00:17:59.476 "data_offset": 256, 00:17:59.476 "data_size": 7936 00:17:59.476 }, 00:17:59.476 { 00:17:59.476 "name": "BaseBdev2", 00:17:59.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.476 "is_configured": false, 00:17:59.476 "data_offset": 0, 00:17:59.476 "data_size": 0 00:17:59.476 } 00:17:59.476 ] 00:17:59.476 }' 00:17:59.476 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.476 10:41:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.760 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:17:59.760 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.760 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.760 [2024-11-20 10:41:03.228470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:59.760 [2024-11-20 10:41:03.228826] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:59.760 [2024-11-20 10:41:03.228878] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:59.760 [2024-11-20 10:41:03.229162] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:59.760 [2024-11-20 10:41:03.229376] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:59.760 [2024-11-20 10:41:03.229423] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:17:59.760 BaseBdev2 00:17:59.760 [2024-11-20 10:41:03.229616] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:59.760 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.760 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:59.760 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:59.760 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:59.760 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:17:59.760 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:59.760 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:59.760 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:59.760 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.760 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.020 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.020 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:00.020 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.020 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.020 [ 00:18:00.020 { 00:18:00.020 "name": "BaseBdev2", 00:18:00.020 "aliases": [ 00:18:00.020 "859f47f7-65c3-48a0-8bc7-2e073ac91419" 00:18:00.020 ], 00:18:00.020 "product_name": "Malloc 
disk", 00:18:00.020 "block_size": 4096, 00:18:00.020 "num_blocks": 8192, 00:18:00.020 "uuid": "859f47f7-65c3-48a0-8bc7-2e073ac91419", 00:18:00.020 "assigned_rate_limits": { 00:18:00.020 "rw_ios_per_sec": 0, 00:18:00.020 "rw_mbytes_per_sec": 0, 00:18:00.020 "r_mbytes_per_sec": 0, 00:18:00.020 "w_mbytes_per_sec": 0 00:18:00.020 }, 00:18:00.020 "claimed": true, 00:18:00.020 "claim_type": "exclusive_write", 00:18:00.020 "zoned": false, 00:18:00.020 "supported_io_types": { 00:18:00.020 "read": true, 00:18:00.020 "write": true, 00:18:00.020 "unmap": true, 00:18:00.020 "flush": true, 00:18:00.020 "reset": true, 00:18:00.020 "nvme_admin": false, 00:18:00.020 "nvme_io": false, 00:18:00.020 "nvme_io_md": false, 00:18:00.020 "write_zeroes": true, 00:18:00.020 "zcopy": true, 00:18:00.020 "get_zone_info": false, 00:18:00.020 "zone_management": false, 00:18:00.020 "zone_append": false, 00:18:00.020 "compare": false, 00:18:00.020 "compare_and_write": false, 00:18:00.020 "abort": true, 00:18:00.020 "seek_hole": false, 00:18:00.020 "seek_data": false, 00:18:00.020 "copy": true, 00:18:00.020 "nvme_iov_md": false 00:18:00.020 }, 00:18:00.020 "memory_domains": [ 00:18:00.020 { 00:18:00.020 "dma_device_id": "system", 00:18:00.020 "dma_device_type": 1 00:18:00.020 }, 00:18:00.020 { 00:18:00.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.020 "dma_device_type": 2 00:18:00.020 } 00:18:00.020 ], 00:18:00.020 "driver_specific": {} 00:18:00.020 } 00:18:00.020 ] 00:18:00.020 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.020 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:18:00.020 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:00.020 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:00.020 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:00.020 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:00.020 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:00.020 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.020 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.020 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:00.020 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.020 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.020 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.020 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.020 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.020 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.020 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.020 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.020 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.020 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.020 "name": "Existed_Raid", 00:18:00.020 "uuid": "7e2ecf47-2b39-4c9b-8908-3e182898c19e", 00:18:00.020 "strip_size_kb": 0, 00:18:00.020 "state": "online", 
00:18:00.020 "raid_level": "raid1", 00:18:00.020 "superblock": true, 00:18:00.020 "num_base_bdevs": 2, 00:18:00.020 "num_base_bdevs_discovered": 2, 00:18:00.020 "num_base_bdevs_operational": 2, 00:18:00.020 "base_bdevs_list": [ 00:18:00.020 { 00:18:00.020 "name": "BaseBdev1", 00:18:00.020 "uuid": "03f5b786-3da8-4535-b70c-a6d757e17682", 00:18:00.020 "is_configured": true, 00:18:00.020 "data_offset": 256, 00:18:00.020 "data_size": 7936 00:18:00.020 }, 00:18:00.020 { 00:18:00.020 "name": "BaseBdev2", 00:18:00.020 "uuid": "859f47f7-65c3-48a0-8bc7-2e073ac91419", 00:18:00.020 "is_configured": true, 00:18:00.020 "data_offset": 256, 00:18:00.020 "data_size": 7936 00:18:00.020 } 00:18:00.020 ] 00:18:00.020 }' 00:18:00.020 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.020 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.279 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:00.279 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:00.279 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:00.279 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:00.279 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:00.279 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:00.279 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:00.279 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.279 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set 
+x 00:18:00.539 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:00.539 [2024-11-20 10:41:03.759935] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:00.539 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.539 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:00.539 "name": "Existed_Raid", 00:18:00.539 "aliases": [ 00:18:00.539 "7e2ecf47-2b39-4c9b-8908-3e182898c19e" 00:18:00.539 ], 00:18:00.539 "product_name": "Raid Volume", 00:18:00.539 "block_size": 4096, 00:18:00.539 "num_blocks": 7936, 00:18:00.539 "uuid": "7e2ecf47-2b39-4c9b-8908-3e182898c19e", 00:18:00.539 "assigned_rate_limits": { 00:18:00.539 "rw_ios_per_sec": 0, 00:18:00.539 "rw_mbytes_per_sec": 0, 00:18:00.539 "r_mbytes_per_sec": 0, 00:18:00.539 "w_mbytes_per_sec": 0 00:18:00.539 }, 00:18:00.539 "claimed": false, 00:18:00.539 "zoned": false, 00:18:00.539 "supported_io_types": { 00:18:00.539 "read": true, 00:18:00.539 "write": true, 00:18:00.539 "unmap": false, 00:18:00.539 "flush": false, 00:18:00.539 "reset": true, 00:18:00.539 "nvme_admin": false, 00:18:00.539 "nvme_io": false, 00:18:00.539 "nvme_io_md": false, 00:18:00.539 "write_zeroes": true, 00:18:00.539 "zcopy": false, 00:18:00.539 "get_zone_info": false, 00:18:00.539 "zone_management": false, 00:18:00.539 "zone_append": false, 00:18:00.539 "compare": false, 00:18:00.539 "compare_and_write": false, 00:18:00.539 "abort": false, 00:18:00.539 "seek_hole": false, 00:18:00.539 "seek_data": false, 00:18:00.539 "copy": false, 00:18:00.539 "nvme_iov_md": false 00:18:00.539 }, 00:18:00.539 "memory_domains": [ 00:18:00.539 { 00:18:00.539 "dma_device_id": "system", 00:18:00.539 "dma_device_type": 1 00:18:00.539 }, 00:18:00.539 { 00:18:00.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.539 "dma_device_type": 2 00:18:00.539 }, 00:18:00.539 { 00:18:00.539 
"dma_device_id": "system", 00:18:00.539 "dma_device_type": 1 00:18:00.539 }, 00:18:00.539 { 00:18:00.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.539 "dma_device_type": 2 00:18:00.539 } 00:18:00.539 ], 00:18:00.539 "driver_specific": { 00:18:00.539 "raid": { 00:18:00.539 "uuid": "7e2ecf47-2b39-4c9b-8908-3e182898c19e", 00:18:00.539 "strip_size_kb": 0, 00:18:00.539 "state": "online", 00:18:00.539 "raid_level": "raid1", 00:18:00.539 "superblock": true, 00:18:00.539 "num_base_bdevs": 2, 00:18:00.539 "num_base_bdevs_discovered": 2, 00:18:00.539 "num_base_bdevs_operational": 2, 00:18:00.539 "base_bdevs_list": [ 00:18:00.539 { 00:18:00.539 "name": "BaseBdev1", 00:18:00.539 "uuid": "03f5b786-3da8-4535-b70c-a6d757e17682", 00:18:00.539 "is_configured": true, 00:18:00.539 "data_offset": 256, 00:18:00.539 "data_size": 7936 00:18:00.539 }, 00:18:00.539 { 00:18:00.539 "name": "BaseBdev2", 00:18:00.539 "uuid": "859f47f7-65c3-48a0-8bc7-2e073ac91419", 00:18:00.539 "is_configured": true, 00:18:00.539 "data_offset": 256, 00:18:00.539 "data_size": 7936 00:18:00.539 } 00:18:00.539 ] 00:18:00.539 } 00:18:00.539 } 00:18:00.539 }' 00:18:00.539 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:00.539 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:00.539 BaseBdev2' 00:18:00.539 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:00.539 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:00.539 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:00.539 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:18:00.539 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:00.539 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.539 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.539 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.539 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:00.539 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:00.539 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:00.539 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:00.539 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.539 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.539 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:00.539 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.539 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:00.539 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:00.539 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:00.539 10:41:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.539 
10:41:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.539 [2024-11-20 10:41:03.975430] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:00.798 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.798 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:00.798 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:00.798 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:00.798 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:18:00.798 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:00.798 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:00.798 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:00.798 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:00.798 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.799 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.799 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:00.799 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.799 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.799 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.799 10:41:04 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.799 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.799 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.799 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.799 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.799 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.799 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.799 "name": "Existed_Raid", 00:18:00.799 "uuid": "7e2ecf47-2b39-4c9b-8908-3e182898c19e", 00:18:00.799 "strip_size_kb": 0, 00:18:00.799 "state": "online", 00:18:00.799 "raid_level": "raid1", 00:18:00.799 "superblock": true, 00:18:00.799 "num_base_bdevs": 2, 00:18:00.799 "num_base_bdevs_discovered": 1, 00:18:00.799 "num_base_bdevs_operational": 1, 00:18:00.799 "base_bdevs_list": [ 00:18:00.799 { 00:18:00.799 "name": null, 00:18:00.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.799 "is_configured": false, 00:18:00.799 "data_offset": 0, 00:18:00.799 "data_size": 7936 00:18:00.799 }, 00:18:00.799 { 00:18:00.799 "name": "BaseBdev2", 00:18:00.799 "uuid": "859f47f7-65c3-48a0-8bc7-2e073ac91419", 00:18:00.799 "is_configured": true, 00:18:00.799 "data_offset": 256, 00:18:00.799 "data_size": 7936 00:18:00.799 } 00:18:00.799 ] 00:18:00.799 }' 00:18:00.799 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.799 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.368 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:01.368 10:41:04 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:01.368 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.368 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.368 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.368 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:01.368 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.368 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:01.368 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:01.368 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:01.368 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.368 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.368 [2024-11-20 10:41:04.624263] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:01.368 [2024-11-20 10:41:04.624390] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:01.368 [2024-11-20 10:41:04.713627] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:01.368 [2024-11-20 10:41:04.713780] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:01.368 [2024-11-20 10:41:04.713797] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:01.368 10:41:04 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.368 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:01.368 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:01.368 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.368 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.368 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.368 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:01.368 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.368 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:01.368 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:01.368 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:01.368 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86082 00:18:01.368 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86082 ']' 00:18:01.368 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86082 00:18:01.368 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:18:01.368 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:01.368 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86082 00:18:01.368 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:01.368 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:01.368 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86082' 00:18:01.368 killing process with pid 86082 00:18:01.368 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86082 00:18:01.368 [2024-11-20 10:41:04.796253] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:01.368 10:41:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86082 00:18:01.368 [2024-11-20 10:41:04.811785] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:02.745 10:41:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:18:02.745 00:18:02.745 real 0m5.118s 00:18:02.745 user 0m7.490s 00:18:02.745 sys 0m0.854s 00:18:02.745 10:41:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:02.745 10:41:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.745 ************************************ 00:18:02.745 END TEST raid_state_function_test_sb_4k 00:18:02.745 ************************************ 00:18:02.745 10:41:05 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:18:02.746 10:41:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:02.746 10:41:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:02.746 10:41:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:02.746 ************************************ 00:18:02.746 START TEST raid_superblock_test_4k 00:18:02.746 ************************************ 00:18:02.746 10:41:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # 
raid_superblock_test raid1 2 00:18:02.746 10:41:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:02.746 10:41:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:02.746 10:41:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:02.746 10:41:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:02.746 10:41:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:02.746 10:41:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:02.746 10:41:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:02.746 10:41:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:02.746 10:41:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:02.746 10:41:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:02.746 10:41:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:02.746 10:41:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:02.746 10:41:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:02.746 10:41:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:02.746 10:41:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:02.746 10:41:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86329 00:18:02.746 10:41:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:02.746 10:41:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # 
waitforlisten 86329 00:18:02.746 10:41:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86329 ']' 00:18:02.746 10:41:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.746 10:41:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:02.746 10:41:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.746 10:41:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:02.746 10:41:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.746 [2024-11-20 10:41:06.000421] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:18:02.746 [2024-11-20 10:41:06.000624] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86329 ] 00:18:02.746 [2024-11-20 10:41:06.152039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.004 [2024-11-20 10:41:06.262606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.004 [2024-11-20 10:41:06.439268] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:03.004 [2024-11-20 10:41:06.439416] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:18:03.574 10:41:06 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.574 malloc1 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.574 [2024-11-20 10:41:06.874820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:03.574 [2024-11-20 10:41:06.874903] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.574 
[2024-11-20 10:41:06.874927] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:03.574 [2024-11-20 10:41:06.874936] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.574 [2024-11-20 10:41:06.876948] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.574 [2024-11-20 10:41:06.876986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:03.574 pt1 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.574 malloc2 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.574 [2024-11-20 10:41:06.928523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:03.574 [2024-11-20 10:41:06.928616] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.574 [2024-11-20 10:41:06.928652] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:03.574 [2024-11-20 10:41:06.928699] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.574 [2024-11-20 10:41:06.930682] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.574 [2024-11-20 10:41:06.930745] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:03.574 pt2 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.574 [2024-11-20 10:41:06.940558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:03.574 [2024-11-20 10:41:06.942300] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:03.574 [2024-11-20 10:41:06.942535] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:03.574 [2024-11-20 10:41:06.942586] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:03.574 [2024-11-20 10:41:06.942824] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:03.574 [2024-11-20 10:41:06.943007] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:03.574 [2024-11-20 10:41:06.943053] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:03.574 [2024-11-20 10:41:06.943234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.574 "name": "raid_bdev1", 00:18:03.574 "uuid": "9f8cc2b2-ada8-4bb7-bef9-5fa867b5556b", 00:18:03.574 "strip_size_kb": 0, 00:18:03.574 "state": "online", 00:18:03.574 "raid_level": "raid1", 00:18:03.574 "superblock": true, 00:18:03.574 "num_base_bdevs": 2, 00:18:03.574 "num_base_bdevs_discovered": 2, 00:18:03.574 "num_base_bdevs_operational": 2, 00:18:03.574 "base_bdevs_list": [ 00:18:03.574 { 00:18:03.574 "name": "pt1", 00:18:03.574 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:03.574 "is_configured": true, 00:18:03.574 "data_offset": 256, 00:18:03.574 "data_size": 7936 00:18:03.574 }, 00:18:03.574 { 00:18:03.574 "name": "pt2", 00:18:03.574 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:03.574 "is_configured": true, 00:18:03.574 "data_offset": 256, 00:18:03.574 "data_size": 7936 00:18:03.574 } 00:18:03.574 ] 00:18:03.574 }' 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.574 10:41:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.144 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:04.144 10:41:07 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:04.144 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:04.144 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:04.144 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:04.144 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:04.144 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:04.144 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:04.144 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.144 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.144 [2024-11-20 10:41:07.400003] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:04.144 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.144 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:04.144 "name": "raid_bdev1", 00:18:04.144 "aliases": [ 00:18:04.144 "9f8cc2b2-ada8-4bb7-bef9-5fa867b5556b" 00:18:04.144 ], 00:18:04.144 "product_name": "Raid Volume", 00:18:04.144 "block_size": 4096, 00:18:04.144 "num_blocks": 7936, 00:18:04.144 "uuid": "9f8cc2b2-ada8-4bb7-bef9-5fa867b5556b", 00:18:04.144 "assigned_rate_limits": { 00:18:04.144 "rw_ios_per_sec": 0, 00:18:04.144 "rw_mbytes_per_sec": 0, 00:18:04.144 "r_mbytes_per_sec": 0, 00:18:04.144 "w_mbytes_per_sec": 0 00:18:04.144 }, 00:18:04.144 "claimed": false, 00:18:04.144 "zoned": false, 00:18:04.144 "supported_io_types": { 00:18:04.144 "read": true, 00:18:04.144 "write": true, 00:18:04.144 "unmap": false, 00:18:04.144 "flush": false, 
00:18:04.144 "reset": true, 00:18:04.144 "nvme_admin": false, 00:18:04.144 "nvme_io": false, 00:18:04.144 "nvme_io_md": false, 00:18:04.144 "write_zeroes": true, 00:18:04.144 "zcopy": false, 00:18:04.144 "get_zone_info": false, 00:18:04.144 "zone_management": false, 00:18:04.144 "zone_append": false, 00:18:04.144 "compare": false, 00:18:04.144 "compare_and_write": false, 00:18:04.144 "abort": false, 00:18:04.144 "seek_hole": false, 00:18:04.144 "seek_data": false, 00:18:04.144 "copy": false, 00:18:04.144 "nvme_iov_md": false 00:18:04.144 }, 00:18:04.144 "memory_domains": [ 00:18:04.144 { 00:18:04.144 "dma_device_id": "system", 00:18:04.144 "dma_device_type": 1 00:18:04.144 }, 00:18:04.144 { 00:18:04.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.144 "dma_device_type": 2 00:18:04.144 }, 00:18:04.144 { 00:18:04.144 "dma_device_id": "system", 00:18:04.144 "dma_device_type": 1 00:18:04.144 }, 00:18:04.144 { 00:18:04.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.144 "dma_device_type": 2 00:18:04.144 } 00:18:04.144 ], 00:18:04.144 "driver_specific": { 00:18:04.144 "raid": { 00:18:04.144 "uuid": "9f8cc2b2-ada8-4bb7-bef9-5fa867b5556b", 00:18:04.144 "strip_size_kb": 0, 00:18:04.144 "state": "online", 00:18:04.144 "raid_level": "raid1", 00:18:04.144 "superblock": true, 00:18:04.144 "num_base_bdevs": 2, 00:18:04.144 "num_base_bdevs_discovered": 2, 00:18:04.144 "num_base_bdevs_operational": 2, 00:18:04.144 "base_bdevs_list": [ 00:18:04.144 { 00:18:04.144 "name": "pt1", 00:18:04.144 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:04.144 "is_configured": true, 00:18:04.144 "data_offset": 256, 00:18:04.144 "data_size": 7936 00:18:04.144 }, 00:18:04.144 { 00:18:04.144 "name": "pt2", 00:18:04.144 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:04.144 "is_configured": true, 00:18:04.144 "data_offset": 256, 00:18:04.144 "data_size": 7936 00:18:04.144 } 00:18:04.144 ] 00:18:04.144 } 00:18:04.144 } 00:18:04.144 }' 00:18:04.144 10:41:07 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:04.144 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:04.144 pt2' 00:18:04.144 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:04.144 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:04.144 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:04.144 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:04.144 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:04.144 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.144 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.144 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.145 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:04.145 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:04.145 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:04.145 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:04.145 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.145 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.145 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:04.145 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.145 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:04.145 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:04.145 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:04.145 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.145 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.145 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:04.145 [2024-11-20 10:41:07.599636] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:04.145 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.405 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9f8cc2b2-ada8-4bb7-bef9-5fa867b5556b 00:18:04.405 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 9f8cc2b2-ada8-4bb7-bef9-5fa867b5556b ']' 00:18:04.405 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:04.405 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.405 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.405 [2024-11-20 10:41:07.647289] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:04.405 [2024-11-20 10:41:07.647311] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:04.405 [2024-11-20 10:41:07.647392] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:04.405 [2024-11-20 10:41:07.647445] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:04.405 [2024-11-20 10:41:07.647466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:04.405 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.405 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.405 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:04.405 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.405 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.405 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.405 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:04.405 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:04.405 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:04.405 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:04.405 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.405 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.405 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.405 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:04.405 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 
00:18:04.405 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.405 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.405 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.405 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:04.405 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:04.405 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.405 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.405 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.405 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:04.405 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:04.405 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:18:04.405 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:04.405 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:04.405 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:04.405 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:04.405 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:04.405 10:41:07 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:04.405 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.406 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.406 [2024-11-20 10:41:07.795052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:04.406 [2024-11-20 10:41:07.796843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:04.406 [2024-11-20 10:41:07.796905] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:04.406 [2024-11-20 10:41:07.796969] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:04.406 [2024-11-20 10:41:07.796982] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:04.406 [2024-11-20 10:41:07.796991] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:04.406 request: 00:18:04.406 { 00:18:04.406 "name": "raid_bdev1", 00:18:04.406 "raid_level": "raid1", 00:18:04.406 "base_bdevs": [ 00:18:04.406 "malloc1", 00:18:04.406 "malloc2" 00:18:04.406 ], 00:18:04.406 "superblock": false, 00:18:04.406 "method": "bdev_raid_create", 00:18:04.406 "req_id": 1 00:18:04.406 } 00:18:04.406 Got JSON-RPC error response 00:18:04.406 response: 00:18:04.406 { 00:18:04.406 "code": -17, 00:18:04.406 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:04.406 } 00:18:04.406 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:04.406 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:18:04.406 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 
128 )) 00:18:04.406 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:04.406 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:04.406 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.406 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.406 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.406 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:04.406 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.406 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:04.406 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:04.406 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:04.406 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.406 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.406 [2024-11-20 10:41:07.858929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:04.406 [2024-11-20 10:41:07.859014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.406 [2024-11-20 10:41:07.859045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:04.406 [2024-11-20 10:41:07.859073] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.406 [2024-11-20 10:41:07.861088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.406 [2024-11-20 10:41:07.861162] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:04.406 [2024-11-20 10:41:07.861253] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:04.406 [2024-11-20 10:41:07.861344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:04.406 pt1 00:18:04.406 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.406 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:04.406 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.406 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:04.406 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:04.406 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:04.406 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:04.406 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.406 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.406 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.406 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.406 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.406 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.406 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.406 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:18:04.666 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.666 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.666 "name": "raid_bdev1", 00:18:04.666 "uuid": "9f8cc2b2-ada8-4bb7-bef9-5fa867b5556b", 00:18:04.666 "strip_size_kb": 0, 00:18:04.666 "state": "configuring", 00:18:04.666 "raid_level": "raid1", 00:18:04.666 "superblock": true, 00:18:04.666 "num_base_bdevs": 2, 00:18:04.666 "num_base_bdevs_discovered": 1, 00:18:04.666 "num_base_bdevs_operational": 2, 00:18:04.666 "base_bdevs_list": [ 00:18:04.666 { 00:18:04.666 "name": "pt1", 00:18:04.666 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:04.666 "is_configured": true, 00:18:04.666 "data_offset": 256, 00:18:04.666 "data_size": 7936 00:18:04.666 }, 00:18:04.666 { 00:18:04.666 "name": null, 00:18:04.666 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:04.666 "is_configured": false, 00:18:04.666 "data_offset": 256, 00:18:04.666 "data_size": 7936 00:18:04.666 } 00:18:04.666 ] 00:18:04.666 }' 00:18:04.666 10:41:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.666 10:41:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.926 10:41:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:04.926 10:41:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:04.926 10:41:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:04.926 10:41:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:04.926 10:41:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.926 10:41:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set 
+x 00:18:04.926 [2024-11-20 10:41:08.366071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:04.926 [2024-11-20 10:41:08.366172] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.926 [2024-11-20 10:41:08.366207] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:04.926 [2024-11-20 10:41:08.366235] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.926 [2024-11-20 10:41:08.366648] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.926 [2024-11-20 10:41:08.366711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:04.926 [2024-11-20 10:41:08.366809] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:04.926 [2024-11-20 10:41:08.366857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:04.926 [2024-11-20 10:41:08.366984] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:04.926 [2024-11-20 10:41:08.367022] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:04.926 [2024-11-20 10:41:08.367248] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:04.926 [2024-11-20 10:41:08.367407] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:04.926 [2024-11-20 10:41:08.367419] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:04.926 [2024-11-20 10:41:08.367574] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.926 pt2 00:18:04.926 10:41:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.926 10:41:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:04.926 10:41:08 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:04.926 10:41:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:04.926 10:41:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.926 10:41:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.926 10:41:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:04.926 10:41:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:04.926 10:41:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:04.926 10:41:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.926 10:41:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.926 10:41:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.926 10:41:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.926 10:41:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.926 10:41:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.926 10:41:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.926 10:41:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.926 10:41:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.186 10:41:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.186 "name": "raid_bdev1", 00:18:05.186 "uuid": "9f8cc2b2-ada8-4bb7-bef9-5fa867b5556b", 00:18:05.186 
"strip_size_kb": 0, 00:18:05.186 "state": "online", 00:18:05.186 "raid_level": "raid1", 00:18:05.186 "superblock": true, 00:18:05.186 "num_base_bdevs": 2, 00:18:05.186 "num_base_bdevs_discovered": 2, 00:18:05.186 "num_base_bdevs_operational": 2, 00:18:05.186 "base_bdevs_list": [ 00:18:05.186 { 00:18:05.186 "name": "pt1", 00:18:05.186 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:05.186 "is_configured": true, 00:18:05.186 "data_offset": 256, 00:18:05.186 "data_size": 7936 00:18:05.186 }, 00:18:05.186 { 00:18:05.186 "name": "pt2", 00:18:05.186 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:05.186 "is_configured": true, 00:18:05.186 "data_offset": 256, 00:18:05.186 "data_size": 7936 00:18:05.186 } 00:18:05.186 ] 00:18:05.186 }' 00:18:05.186 10:41:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.186 10:41:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.445 10:41:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:05.446 10:41:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:05.446 10:41:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:05.446 10:41:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:05.446 10:41:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:05.446 10:41:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:05.446 10:41:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:05.446 10:41:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:05.446 10:41:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.446 10:41:08 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.446 [2024-11-20 10:41:08.809588] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:05.446 10:41:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.446 10:41:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:05.446 "name": "raid_bdev1", 00:18:05.446 "aliases": [ 00:18:05.446 "9f8cc2b2-ada8-4bb7-bef9-5fa867b5556b" 00:18:05.446 ], 00:18:05.446 "product_name": "Raid Volume", 00:18:05.446 "block_size": 4096, 00:18:05.446 "num_blocks": 7936, 00:18:05.446 "uuid": "9f8cc2b2-ada8-4bb7-bef9-5fa867b5556b", 00:18:05.446 "assigned_rate_limits": { 00:18:05.446 "rw_ios_per_sec": 0, 00:18:05.446 "rw_mbytes_per_sec": 0, 00:18:05.446 "r_mbytes_per_sec": 0, 00:18:05.446 "w_mbytes_per_sec": 0 00:18:05.446 }, 00:18:05.446 "claimed": false, 00:18:05.446 "zoned": false, 00:18:05.446 "supported_io_types": { 00:18:05.446 "read": true, 00:18:05.446 "write": true, 00:18:05.446 "unmap": false, 00:18:05.446 "flush": false, 00:18:05.446 "reset": true, 00:18:05.446 "nvme_admin": false, 00:18:05.446 "nvme_io": false, 00:18:05.446 "nvme_io_md": false, 00:18:05.446 "write_zeroes": true, 00:18:05.446 "zcopy": false, 00:18:05.446 "get_zone_info": false, 00:18:05.446 "zone_management": false, 00:18:05.446 "zone_append": false, 00:18:05.446 "compare": false, 00:18:05.446 "compare_and_write": false, 00:18:05.446 "abort": false, 00:18:05.446 "seek_hole": false, 00:18:05.446 "seek_data": false, 00:18:05.446 "copy": false, 00:18:05.446 "nvme_iov_md": false 00:18:05.446 }, 00:18:05.446 "memory_domains": [ 00:18:05.446 { 00:18:05.446 "dma_device_id": "system", 00:18:05.446 "dma_device_type": 1 00:18:05.446 }, 00:18:05.446 { 00:18:05.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.446 "dma_device_type": 2 00:18:05.446 }, 00:18:05.446 { 00:18:05.446 "dma_device_id": "system", 00:18:05.446 
"dma_device_type": 1 00:18:05.446 }, 00:18:05.446 { 00:18:05.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.446 "dma_device_type": 2 00:18:05.446 } 00:18:05.446 ], 00:18:05.446 "driver_specific": { 00:18:05.446 "raid": { 00:18:05.446 "uuid": "9f8cc2b2-ada8-4bb7-bef9-5fa867b5556b", 00:18:05.446 "strip_size_kb": 0, 00:18:05.446 "state": "online", 00:18:05.446 "raid_level": "raid1", 00:18:05.446 "superblock": true, 00:18:05.446 "num_base_bdevs": 2, 00:18:05.446 "num_base_bdevs_discovered": 2, 00:18:05.446 "num_base_bdevs_operational": 2, 00:18:05.446 "base_bdevs_list": [ 00:18:05.446 { 00:18:05.446 "name": "pt1", 00:18:05.446 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:05.446 "is_configured": true, 00:18:05.446 "data_offset": 256, 00:18:05.446 "data_size": 7936 00:18:05.446 }, 00:18:05.446 { 00:18:05.446 "name": "pt2", 00:18:05.446 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:05.446 "is_configured": true, 00:18:05.446 "data_offset": 256, 00:18:05.446 "data_size": 7936 00:18:05.446 } 00:18:05.446 ] 00:18:05.446 } 00:18:05.446 } 00:18:05.446 }' 00:18:05.446 10:41:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:05.446 10:41:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:05.446 pt2' 00:18:05.446 10:41:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:05.706 10:41:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:05.706 10:41:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:05.706 10:41:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:05.706 10:41:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.706 
10:41:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.706 10:41:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:05.706 10:41:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.706 10:41:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:05.706 10:41:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:05.706 10:41:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:05.706 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:05.706 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:05.706 10:41:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.706 10:41:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.706 10:41:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.706 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:05.706 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:05.706 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:05.706 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:05.706 10:41:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.706 10:41:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.706 [2024-11-20 10:41:09.057117] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:05.706 10:41:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.706 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 9f8cc2b2-ada8-4bb7-bef9-5fa867b5556b '!=' 9f8cc2b2-ada8-4bb7-bef9-5fa867b5556b ']' 00:18:05.706 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:05.706 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:05.706 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:18:05.706 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:05.707 10:41:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.707 10:41:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.707 [2024-11-20 10:41:09.104859] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:05.707 10:41:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.707 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:05.707 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:05.707 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:05.707 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:05.707 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:05.707 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:05.707 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:18:05.707 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.707 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.707 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.707 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.707 10:41:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.707 10:41:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.707 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.707 10:41:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.707 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.707 "name": "raid_bdev1", 00:18:05.707 "uuid": "9f8cc2b2-ada8-4bb7-bef9-5fa867b5556b", 00:18:05.707 "strip_size_kb": 0, 00:18:05.707 "state": "online", 00:18:05.707 "raid_level": "raid1", 00:18:05.707 "superblock": true, 00:18:05.707 "num_base_bdevs": 2, 00:18:05.707 "num_base_bdevs_discovered": 1, 00:18:05.707 "num_base_bdevs_operational": 1, 00:18:05.707 "base_bdevs_list": [ 00:18:05.707 { 00:18:05.707 "name": null, 00:18:05.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.707 "is_configured": false, 00:18:05.707 "data_offset": 0, 00:18:05.707 "data_size": 7936 00:18:05.707 }, 00:18:05.707 { 00:18:05.707 "name": "pt2", 00:18:05.707 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:05.707 "is_configured": true, 00:18:05.707 "data_offset": 256, 00:18:05.707 "data_size": 7936 00:18:05.707 } 00:18:05.707 ] 00:18:05.707 }' 00:18:05.707 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.707 10:41:09 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.275 [2024-11-20 10:41:09.584044] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:06.275 [2024-11-20 10:41:09.584124] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:06.275 [2024-11-20 10:41:09.584218] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:06.275 [2024-11-20 10:41:09.584285] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:06.275 [2024-11-20 10:41:09.584337] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 
-- # (( i = 1 )) 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.275 [2024-11-20 10:41:09.655883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:06.275 [2024-11-20 10:41:09.655955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.275 [2024-11-20 10:41:09.655973] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:06.275 [2024-11-20 10:41:09.655983] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.275 [2024-11-20 10:41:09.658072] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.275 [2024-11-20 10:41:09.658110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:06.275 [2024-11-20 10:41:09.658183] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:06.275 [2024-11-20 10:41:09.658227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:06.275 [2024-11-20 10:41:09.658317] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:06.275 [2024-11-20 10:41:09.658329] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:06.275 [2024-11-20 10:41:09.658545] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:06.275 [2024-11-20 10:41:09.658767] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:06.275 [2024-11-20 10:41:09.658780] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:06.275 [2024-11-20 10:41:09.658924] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.275 pt2 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.275 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.275 "name": "raid_bdev1", 00:18:06.275 "uuid": "9f8cc2b2-ada8-4bb7-bef9-5fa867b5556b", 00:18:06.275 "strip_size_kb": 0, 00:18:06.275 "state": "online", 00:18:06.275 "raid_level": "raid1", 00:18:06.275 "superblock": true, 00:18:06.275 "num_base_bdevs": 2, 00:18:06.275 "num_base_bdevs_discovered": 1, 00:18:06.275 "num_base_bdevs_operational": 1, 00:18:06.275 "base_bdevs_list": [ 00:18:06.275 { 00:18:06.275 "name": null, 00:18:06.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.275 "is_configured": false, 00:18:06.275 "data_offset": 256, 00:18:06.275 "data_size": 7936 00:18:06.275 }, 00:18:06.275 { 00:18:06.275 "name": "pt2", 00:18:06.276 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:06.276 "is_configured": true, 00:18:06.276 "data_offset": 256, 00:18:06.276 "data_size": 7936 00:18:06.276 } 00:18:06.276 ] 00:18:06.276 }' 
00:18:06.276 10:41:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.276 10:41:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.845 10:41:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:06.845 10:41:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.845 10:41:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.845 [2024-11-20 10:41:10.055213] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:06.845 [2024-11-20 10:41:10.055280] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:06.845 [2024-11-20 10:41:10.055365] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:06.845 [2024-11-20 10:41:10.055430] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:06.845 [2024-11-20 10:41:10.055470] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:06.845 10:41:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.845 10:41:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.845 10:41:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:06.845 10:41:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.845 10:41:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.845 10:41:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.845 10:41:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:06.845 10:41:10 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:06.845 10:41:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:06.845 10:41:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:06.845 10:41:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.845 10:41:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.845 [2024-11-20 10:41:10.115134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:06.845 [2024-11-20 10:41:10.115219] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.845 [2024-11-20 10:41:10.115251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:06.845 [2024-11-20 10:41:10.115277] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.845 [2024-11-20 10:41:10.117349] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.845 [2024-11-20 10:41:10.117426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:06.845 [2024-11-20 10:41:10.117519] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:06.846 [2024-11-20 10:41:10.117594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:06.846 [2024-11-20 10:41:10.117754] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:06.846 [2024-11-20 10:41:10.117802] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:06.846 [2024-11-20 10:41:10.117836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:06.846 [2024-11-20 10:41:10.117929] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:06.846 [2024-11-20 10:41:10.118033] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:06.846 [2024-11-20 10:41:10.118044] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:06.846 [2024-11-20 10:41:10.118275] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:06.846 [2024-11-20 10:41:10.118421] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:06.846 [2024-11-20 10:41:10.118434] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:06.846 [2024-11-20 10:41:10.118561] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.846 pt1 00:18:06.846 10:41:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.846 10:41:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:06.846 10:41:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:06.846 10:41:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.846 10:41:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.846 10:41:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.846 10:41:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.846 10:41:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:06.846 10:41:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.846 10:41:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:18:06.846 10:41:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.846 10:41:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.846 10:41:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.846 10:41:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.846 10:41:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.846 10:41:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.846 10:41:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.846 10:41:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.846 "name": "raid_bdev1", 00:18:06.846 "uuid": "9f8cc2b2-ada8-4bb7-bef9-5fa867b5556b", 00:18:06.846 "strip_size_kb": 0, 00:18:06.846 "state": "online", 00:18:06.846 "raid_level": "raid1", 00:18:06.846 "superblock": true, 00:18:06.846 "num_base_bdevs": 2, 00:18:06.846 "num_base_bdevs_discovered": 1, 00:18:06.846 "num_base_bdevs_operational": 1, 00:18:06.846 "base_bdevs_list": [ 00:18:06.846 { 00:18:06.846 "name": null, 00:18:06.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.846 "is_configured": false, 00:18:06.846 "data_offset": 256, 00:18:06.846 "data_size": 7936 00:18:06.846 }, 00:18:06.846 { 00:18:06.846 "name": "pt2", 00:18:06.846 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:06.846 "is_configured": true, 00:18:06.846 "data_offset": 256, 00:18:06.846 "data_size": 7936 00:18:06.846 } 00:18:06.846 ] 00:18:06.846 }' 00:18:06.846 10:41:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.846 10:41:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.105 10:41:10 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:07.105 10:41:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:07.105 10:41:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.105 10:41:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.105 10:41:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.366 10:41:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:07.366 10:41:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:07.366 10:41:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:07.366 10:41:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.366 10:41:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.366 [2024-11-20 10:41:10.610515] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:07.366 10:41:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.366 10:41:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 9f8cc2b2-ada8-4bb7-bef9-5fa867b5556b '!=' 9f8cc2b2-ada8-4bb7-bef9-5fa867b5556b ']' 00:18:07.366 10:41:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86329 00:18:07.366 10:41:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86329 ']' 00:18:07.366 10:41:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86329 00:18:07.366 10:41:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:18:07.366 10:41:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:18:07.366 10:41:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86329 00:18:07.366 10:41:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:07.366 10:41:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:07.366 10:41:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86329' 00:18:07.366 killing process with pid 86329 00:18:07.366 10:41:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86329 00:18:07.366 [2024-11-20 10:41:10.689672] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:07.366 [2024-11-20 10:41:10.689747] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:07.366 [2024-11-20 10:41:10.689788] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:07.366 [2024-11-20 10:41:10.689800] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:07.366 10:41:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86329 00:18:07.625 [2024-11-20 10:41:10.881977] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:08.566 10:41:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:18:08.566 00:18:08.566 real 0m5.991s 00:18:08.566 user 0m9.136s 00:18:08.566 sys 0m1.075s 00:18:08.566 ************************************ 00:18:08.566 END TEST raid_superblock_test_4k 00:18:08.566 ************************************ 00:18:08.566 10:41:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:08.566 10:41:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.566 10:41:11 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = 
true ']' 00:18:08.566 10:41:11 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:18:08.566 10:41:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:08.567 10:41:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:08.567 10:41:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:08.567 ************************************ 00:18:08.567 START TEST raid_rebuild_test_sb_4k 00:18:08.567 ************************************ 00:18:08.567 10:41:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:18:08.567 10:41:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:08.567 10:41:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:08.567 10:41:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:08.567 10:41:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:08.567 10:41:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:08.567 10:41:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:08.567 10:41:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:08.567 10:41:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:08.567 10:41:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:08.567 10:41:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:08.567 10:41:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:08.567 10:41:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:08.567 10:41:11 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:08.567 10:41:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:08.567 10:41:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:08.567 10:41:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:08.567 10:41:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:08.567 10:41:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:08.567 10:41:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:08.567 10:41:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:08.567 10:41:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:08.567 10:41:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:08.567 10:41:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:08.567 10:41:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:08.567 10:41:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86657 00:18:08.567 10:41:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:08.567 10:41:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86657 00:18:08.567 10:41:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86657 ']' 00:18:08.567 10:41:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.567 10:41:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:18:08.567 10:41:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:08.567 10:41:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:08.567 10:41:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.826 [2024-11-20 10:41:12.072592] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:18:08.827 [2024-11-20 10:41:12.072802] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:18:08.827 Zero copy mechanism will not be used. 00:18:08.827 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86657 ] 00:18:08.827 [2024-11-20 10:41:12.243641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.086 [2024-11-20 10:41:12.349725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.086 [2024-11-20 10:41:12.540732] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:09.086 [2024-11-20 10:41:12.540849] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:09.657 10:41:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:09.657 10:41:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:18:09.657 10:41:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:09.657 10:41:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:18:09.657 
10:41:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.657 10:41:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.657 BaseBdev1_malloc 00:18:09.657 10:41:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.657 10:41:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:09.657 10:41:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.657 10:41:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.657 [2024-11-20 10:41:12.926614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:09.657 [2024-11-20 10:41:12.926681] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.657 [2024-11-20 10:41:12.926702] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:09.657 [2024-11-20 10:41:12.926712] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.657 [2024-11-20 10:41:12.928729] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.657 [2024-11-20 10:41:12.928769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:09.657 BaseBdev1 00:18:09.657 10:41:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.657 10:41:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:09.657 10:41:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:18:09.658 10:41:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.658 10:41:12 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:18:09.658 BaseBdev2_malloc 00:18:09.658 10:41:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.658 10:41:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:09.658 10:41:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.658 10:41:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.658 [2024-11-20 10:41:12.979820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:09.658 [2024-11-20 10:41:12.979883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.658 [2024-11-20 10:41:12.979900] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:09.658 [2024-11-20 10:41:12.979911] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.658 [2024-11-20 10:41:12.981884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.658 [2024-11-20 10:41:12.981996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:09.658 BaseBdev2 00:18:09.658 10:41:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.658 10:41:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:18:09.658 10:41:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.658 10:41:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.658 spare_malloc 00:18:09.658 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.658 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b 
spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:09.658 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.658 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.658 spare_delay 00:18:09.658 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.658 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:09.658 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.658 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.658 [2024-11-20 10:41:13.051143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:09.658 [2024-11-20 10:41:13.051211] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.658 [2024-11-20 10:41:13.051247] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:09.658 [2024-11-20 10:41:13.051257] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.658 [2024-11-20 10:41:13.053305] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.658 [2024-11-20 10:41:13.053344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:09.658 spare 00:18:09.658 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.658 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:09.658 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.658 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.658 
[2024-11-20 10:41:13.059183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:09.658 [2024-11-20 10:41:13.060881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:09.658 [2024-11-20 10:41:13.061047] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:09.658 [2024-11-20 10:41:13.061064] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:09.658 [2024-11-20 10:41:13.061288] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:09.658 [2024-11-20 10:41:13.061463] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:09.658 [2024-11-20 10:41:13.061473] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:09.658 [2024-11-20 10:41:13.061605] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.658 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.658 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:09.658 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.658 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.658 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:09.658 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:09.658 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:09.658 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.658 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.658 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.658 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.658 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.658 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.658 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.658 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.658 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.658 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.658 "name": "raid_bdev1", 00:18:09.658 "uuid": "81e54f47-a4f3-4f9a-b128-246e03177e28", 00:18:09.658 "strip_size_kb": 0, 00:18:09.658 "state": "online", 00:18:09.658 "raid_level": "raid1", 00:18:09.658 "superblock": true, 00:18:09.658 "num_base_bdevs": 2, 00:18:09.658 "num_base_bdevs_discovered": 2, 00:18:09.658 "num_base_bdevs_operational": 2, 00:18:09.658 "base_bdevs_list": [ 00:18:09.658 { 00:18:09.658 "name": "BaseBdev1", 00:18:09.658 "uuid": "3d88bcaa-96e9-5407-8295-888f91bb97c1", 00:18:09.658 "is_configured": true, 00:18:09.658 "data_offset": 256, 00:18:09.658 "data_size": 7936 00:18:09.658 }, 00:18:09.658 { 00:18:09.658 "name": "BaseBdev2", 00:18:09.658 "uuid": "83cf9a74-1c8e-58eb-b725-0a968999dd0c", 00:18:09.658 "is_configured": true, 00:18:09.658 "data_offset": 256, 00:18:09.658 "data_size": 7936 00:18:09.658 } 00:18:09.658 ] 00:18:09.658 }' 00:18:09.658 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.658 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set 
+x 00:18:10.228 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:10.228 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.228 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:10.228 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:10.228 [2024-11-20 10:41:13.522679] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:10.228 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.228 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:10.228 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:10.228 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.228 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.228 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:10.228 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.228 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:10.228 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:10.228 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:10.228 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:10.228 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:10.228 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:18:10.228 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:10.228 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:10.228 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:10.228 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:10.229 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:18:10.229 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:10.229 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:10.229 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:10.489 [2024-11-20 10:41:13.770037] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:10.489 /dev/nbd0 00:18:10.489 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:10.489 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:10.489 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:10.489 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:18:10.489 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:10.489 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:10.489 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:10.489 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:18:10.489 10:41:13 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:10.489 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:10.489 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:10.489 1+0 records in 00:18:10.489 1+0 records out 00:18:10.489 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434727 s, 9.4 MB/s 00:18:10.489 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:10.489 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:18:10.489 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:10.489 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:10.489 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:18:10.489 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:10.489 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:10.489 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:10.489 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:10.489 10:41:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:11.059 7936+0 records in 00:18:11.059 7936+0 records out 00:18:11.059 32505856 bytes (33 MB, 31 MiB) copied, 0.619662 s, 52.5 MB/s 00:18:11.059 10:41:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:11.059 10:41:14 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:11.059 10:41:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:11.059 10:41:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:11.059 10:41:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:18:11.059 10:41:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:11.059 10:41:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:11.319 10:41:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:11.319 10:41:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:11.319 10:41:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:11.319 10:41:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:11.319 10:41:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:11.319 10:41:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:11.319 [2024-11-20 10:41:14.677424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:11.319 10:41:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:11.319 10:41:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:11.319 10:41:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:11.319 10:41:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.319 10:41:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:11.319 [2024-11-20 10:41:14.697485] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:11.319 10:41:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.319 10:41:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:11.319 10:41:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:11.319 10:41:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:11.319 10:41:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:11.319 10:41:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:11.319 10:41:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:11.319 10:41:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.319 10:41:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.319 10:41:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.319 10:41:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.319 10:41:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.319 10:41:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.319 10:41:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:11.319 10:41:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.319 10:41:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.319 10:41:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.319 "name": 
"raid_bdev1", 00:18:11.319 "uuid": "81e54f47-a4f3-4f9a-b128-246e03177e28", 00:18:11.319 "strip_size_kb": 0, 00:18:11.319 "state": "online", 00:18:11.319 "raid_level": "raid1", 00:18:11.319 "superblock": true, 00:18:11.319 "num_base_bdevs": 2, 00:18:11.319 "num_base_bdevs_discovered": 1, 00:18:11.319 "num_base_bdevs_operational": 1, 00:18:11.319 "base_bdevs_list": [ 00:18:11.319 { 00:18:11.319 "name": null, 00:18:11.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.319 "is_configured": false, 00:18:11.319 "data_offset": 0, 00:18:11.319 "data_size": 7936 00:18:11.319 }, 00:18:11.319 { 00:18:11.319 "name": "BaseBdev2", 00:18:11.319 "uuid": "83cf9a74-1c8e-58eb-b725-0a968999dd0c", 00:18:11.319 "is_configured": true, 00:18:11.319 "data_offset": 256, 00:18:11.319 "data_size": 7936 00:18:11.319 } 00:18:11.319 ] 00:18:11.319 }' 00:18:11.319 10:41:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.319 10:41:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:11.889 10:41:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:11.889 10:41:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.889 10:41:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:11.889 [2024-11-20 10:41:15.136707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:11.889 [2024-11-20 10:41:15.152496] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:18:11.889 10:41:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.889 10:41:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:11.889 [2024-11-20 10:41:15.154241] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:12.829 10:41:16 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:12.829 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:12.829 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:12.829 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:12.829 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:12.829 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.830 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.830 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.830 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.830 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.830 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:12.830 "name": "raid_bdev1", 00:18:12.830 "uuid": "81e54f47-a4f3-4f9a-b128-246e03177e28", 00:18:12.830 "strip_size_kb": 0, 00:18:12.830 "state": "online", 00:18:12.830 "raid_level": "raid1", 00:18:12.830 "superblock": true, 00:18:12.830 "num_base_bdevs": 2, 00:18:12.830 "num_base_bdevs_discovered": 2, 00:18:12.830 "num_base_bdevs_operational": 2, 00:18:12.830 "process": { 00:18:12.830 "type": "rebuild", 00:18:12.830 "target": "spare", 00:18:12.830 "progress": { 00:18:12.830 "blocks": 2560, 00:18:12.830 "percent": 32 00:18:12.830 } 00:18:12.830 }, 00:18:12.830 "base_bdevs_list": [ 00:18:12.830 { 00:18:12.830 "name": "spare", 00:18:12.830 "uuid": "c81ccb62-977a-59ab-95b1-a77cda9fa119", 00:18:12.830 "is_configured": true, 00:18:12.830 "data_offset": 256, 
00:18:12.830 "data_size": 7936 00:18:12.830 }, 00:18:12.830 { 00:18:12.830 "name": "BaseBdev2", 00:18:12.830 "uuid": "83cf9a74-1c8e-58eb-b725-0a968999dd0c", 00:18:12.830 "is_configured": true, 00:18:12.830 "data_offset": 256, 00:18:12.830 "data_size": 7936 00:18:12.830 } 00:18:12.830 ] 00:18:12.830 }' 00:18:12.830 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.830 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:12.830 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.830 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:12.830 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:12.830 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.830 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.830 [2024-11-20 10:41:16.301466] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:13.090 [2024-11-20 10:41:16.358859] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:13.090 [2024-11-20 10:41:16.358925] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:13.090 [2024-11-20 10:41:16.358939] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:13.090 [2024-11-20 10:41:16.358948] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:13.090 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.090 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:13.090 
10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:13.090 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:13.090 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:13.090 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:13.090 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:13.090 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.090 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.090 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.090 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.090 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.090 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.090 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.090 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.090 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.090 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.090 "name": "raid_bdev1", 00:18:13.090 "uuid": "81e54f47-a4f3-4f9a-b128-246e03177e28", 00:18:13.090 "strip_size_kb": 0, 00:18:13.090 "state": "online", 00:18:13.090 "raid_level": "raid1", 00:18:13.090 "superblock": true, 00:18:13.090 "num_base_bdevs": 2, 00:18:13.090 "num_base_bdevs_discovered": 1, 00:18:13.090 
"num_base_bdevs_operational": 1, 00:18:13.090 "base_bdevs_list": [ 00:18:13.090 { 00:18:13.090 "name": null, 00:18:13.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.090 "is_configured": false, 00:18:13.090 "data_offset": 0, 00:18:13.090 "data_size": 7936 00:18:13.090 }, 00:18:13.090 { 00:18:13.090 "name": "BaseBdev2", 00:18:13.090 "uuid": "83cf9a74-1c8e-58eb-b725-0a968999dd0c", 00:18:13.090 "is_configured": true, 00:18:13.090 "data_offset": 256, 00:18:13.090 "data_size": 7936 00:18:13.090 } 00:18:13.090 ] 00:18:13.090 }' 00:18:13.090 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.090 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.350 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:13.351 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.351 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:13.351 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:13.351 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.351 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.351 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.351 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.611 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.611 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.611 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.611 
"name": "raid_bdev1", 00:18:13.611 "uuid": "81e54f47-a4f3-4f9a-b128-246e03177e28", 00:18:13.611 "strip_size_kb": 0, 00:18:13.611 "state": "online", 00:18:13.611 "raid_level": "raid1", 00:18:13.611 "superblock": true, 00:18:13.611 "num_base_bdevs": 2, 00:18:13.611 "num_base_bdevs_discovered": 1, 00:18:13.611 "num_base_bdevs_operational": 1, 00:18:13.611 "base_bdevs_list": [ 00:18:13.611 { 00:18:13.611 "name": null, 00:18:13.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.611 "is_configured": false, 00:18:13.611 "data_offset": 0, 00:18:13.611 "data_size": 7936 00:18:13.611 }, 00:18:13.611 { 00:18:13.611 "name": "BaseBdev2", 00:18:13.611 "uuid": "83cf9a74-1c8e-58eb-b725-0a968999dd0c", 00:18:13.611 "is_configured": true, 00:18:13.611 "data_offset": 256, 00:18:13.611 "data_size": 7936 00:18:13.611 } 00:18:13.611 ] 00:18:13.611 }' 00:18:13.611 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.611 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:13.611 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.611 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:13.611 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:13.611 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.611 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.611 [2024-11-20 10:41:16.961268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:13.611 [2024-11-20 10:41:16.976972] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:18:13.611 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:18:13.611 10:41:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:13.611 [2024-11-20 10:41:16.978793] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:14.551 10:41:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:14.551 10:41:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.551 10:41:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:14.551 10:41:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:14.551 10:41:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:14.551 10:41:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.551 10:41:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.551 10:41:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.551 10:41:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.551 10:41:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.551 10:41:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.551 "name": "raid_bdev1", 00:18:14.551 "uuid": "81e54f47-a4f3-4f9a-b128-246e03177e28", 00:18:14.551 "strip_size_kb": 0, 00:18:14.551 "state": "online", 00:18:14.551 "raid_level": "raid1", 00:18:14.551 "superblock": true, 00:18:14.551 "num_base_bdevs": 2, 00:18:14.551 "num_base_bdevs_discovered": 2, 00:18:14.551 "num_base_bdevs_operational": 2, 00:18:14.551 "process": { 00:18:14.551 "type": "rebuild", 00:18:14.551 "target": "spare", 00:18:14.551 "progress": { 00:18:14.551 "blocks": 2560, 00:18:14.551 
"percent": 32 00:18:14.551 } 00:18:14.551 }, 00:18:14.551 "base_bdevs_list": [ 00:18:14.551 { 00:18:14.551 "name": "spare", 00:18:14.551 "uuid": "c81ccb62-977a-59ab-95b1-a77cda9fa119", 00:18:14.551 "is_configured": true, 00:18:14.551 "data_offset": 256, 00:18:14.551 "data_size": 7936 00:18:14.551 }, 00:18:14.551 { 00:18:14.551 "name": "BaseBdev2", 00:18:14.551 "uuid": "83cf9a74-1c8e-58eb-b725-0a968999dd0c", 00:18:14.551 "is_configured": true, 00:18:14.551 "data_offset": 256, 00:18:14.551 "data_size": 7936 00:18:14.551 } 00:18:14.551 ] 00:18:14.551 }' 00:18:14.551 10:41:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.811 10:41:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:14.811 10:41:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.811 10:41:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:14.811 10:41:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:14.811 10:41:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:14.811 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:14.811 10:41:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:14.811 10:41:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:14.811 10:41:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:14.811 10:41:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=683 00:18:14.811 10:41:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:14.811 10:41:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:18:14.811 10:41:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.811 10:41:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:14.811 10:41:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:14.811 10:41:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:14.811 10:41:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.811 10:41:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.811 10:41:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.811 10:41:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.811 10:41:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.811 10:41:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.811 "name": "raid_bdev1", 00:18:14.811 "uuid": "81e54f47-a4f3-4f9a-b128-246e03177e28", 00:18:14.811 "strip_size_kb": 0, 00:18:14.811 "state": "online", 00:18:14.811 "raid_level": "raid1", 00:18:14.811 "superblock": true, 00:18:14.811 "num_base_bdevs": 2, 00:18:14.811 "num_base_bdevs_discovered": 2, 00:18:14.811 "num_base_bdevs_operational": 2, 00:18:14.811 "process": { 00:18:14.811 "type": "rebuild", 00:18:14.811 "target": "spare", 00:18:14.811 "progress": { 00:18:14.811 "blocks": 2816, 00:18:14.811 "percent": 35 00:18:14.811 } 00:18:14.811 }, 00:18:14.811 "base_bdevs_list": [ 00:18:14.811 { 00:18:14.811 "name": "spare", 00:18:14.811 "uuid": "c81ccb62-977a-59ab-95b1-a77cda9fa119", 00:18:14.811 "is_configured": true, 00:18:14.811 "data_offset": 256, 00:18:14.811 "data_size": 7936 00:18:14.811 }, 00:18:14.811 { 00:18:14.811 "name": "BaseBdev2", 
00:18:14.811 "uuid": "83cf9a74-1c8e-58eb-b725-0a968999dd0c", 00:18:14.811 "is_configured": true, 00:18:14.811 "data_offset": 256, 00:18:14.811 "data_size": 7936 00:18:14.811 } 00:18:14.811 ] 00:18:14.811 }' 00:18:14.811 10:41:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.811 10:41:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:14.811 10:41:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.811 10:41:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:14.811 10:41:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:16.192 10:41:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:16.192 10:41:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:16.192 10:41:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.192 10:41:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:16.192 10:41:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:16.192 10:41:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.192 10:41:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.192 10:41:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.193 10:41:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.193 10:41:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.193 10:41:19 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.193 10:41:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.193 "name": "raid_bdev1", 00:18:16.193 "uuid": "81e54f47-a4f3-4f9a-b128-246e03177e28", 00:18:16.193 "strip_size_kb": 0, 00:18:16.193 "state": "online", 00:18:16.193 "raid_level": "raid1", 00:18:16.193 "superblock": true, 00:18:16.193 "num_base_bdevs": 2, 00:18:16.193 "num_base_bdevs_discovered": 2, 00:18:16.193 "num_base_bdevs_operational": 2, 00:18:16.193 "process": { 00:18:16.193 "type": "rebuild", 00:18:16.193 "target": "spare", 00:18:16.193 "progress": { 00:18:16.193 "blocks": 5632, 00:18:16.193 "percent": 70 00:18:16.193 } 00:18:16.193 }, 00:18:16.193 "base_bdevs_list": [ 00:18:16.193 { 00:18:16.193 "name": "spare", 00:18:16.193 "uuid": "c81ccb62-977a-59ab-95b1-a77cda9fa119", 00:18:16.193 "is_configured": true, 00:18:16.193 "data_offset": 256, 00:18:16.193 "data_size": 7936 00:18:16.193 }, 00:18:16.193 { 00:18:16.193 "name": "BaseBdev2", 00:18:16.193 "uuid": "83cf9a74-1c8e-58eb-b725-0a968999dd0c", 00:18:16.193 "is_configured": true, 00:18:16.193 "data_offset": 256, 00:18:16.193 "data_size": 7936 00:18:16.193 } 00:18:16.193 ] 00:18:16.193 }' 00:18:16.193 10:41:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:16.193 10:41:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:16.193 10:41:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.193 10:41:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:16.193 10:41:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:16.763 [2024-11-20 10:41:20.090295] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:16.763 [2024-11-20 10:41:20.090433] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:16.763 [2024-11-20 10:41:20.090583] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:17.023 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:17.023 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:17.023 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:17.023 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:17.023 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:17.023 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:17.023 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.023 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.023 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.023 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.023 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.023 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:17.023 "name": "raid_bdev1", 00:18:17.023 "uuid": "81e54f47-a4f3-4f9a-b128-246e03177e28", 00:18:17.023 "strip_size_kb": 0, 00:18:17.023 "state": "online", 00:18:17.023 "raid_level": "raid1", 00:18:17.023 "superblock": true, 00:18:17.023 "num_base_bdevs": 2, 00:18:17.023 "num_base_bdevs_discovered": 2, 00:18:17.023 "num_base_bdevs_operational": 2, 00:18:17.023 "base_bdevs_list": [ 00:18:17.023 { 00:18:17.023 "name": 
"spare", 00:18:17.023 "uuid": "c81ccb62-977a-59ab-95b1-a77cda9fa119", 00:18:17.023 "is_configured": true, 00:18:17.023 "data_offset": 256, 00:18:17.023 "data_size": 7936 00:18:17.023 }, 00:18:17.023 { 00:18:17.023 "name": "BaseBdev2", 00:18:17.023 "uuid": "83cf9a74-1c8e-58eb-b725-0a968999dd0c", 00:18:17.023 "is_configured": true, 00:18:17.023 "data_offset": 256, 00:18:17.023 "data_size": 7936 00:18:17.023 } 00:18:17.023 ] 00:18:17.023 }' 00:18:17.023 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:17.023 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:17.023 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:17.283 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:17.283 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:18:17.283 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:17.283 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:17.283 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:17.283 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:17.283 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:17.283 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.283 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.284 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.284 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:18:17.284 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.284 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:17.284 "name": "raid_bdev1", 00:18:17.284 "uuid": "81e54f47-a4f3-4f9a-b128-246e03177e28", 00:18:17.284 "strip_size_kb": 0, 00:18:17.284 "state": "online", 00:18:17.284 "raid_level": "raid1", 00:18:17.284 "superblock": true, 00:18:17.284 "num_base_bdevs": 2, 00:18:17.284 "num_base_bdevs_discovered": 2, 00:18:17.284 "num_base_bdevs_operational": 2, 00:18:17.284 "base_bdevs_list": [ 00:18:17.284 { 00:18:17.284 "name": "spare", 00:18:17.284 "uuid": "c81ccb62-977a-59ab-95b1-a77cda9fa119", 00:18:17.284 "is_configured": true, 00:18:17.284 "data_offset": 256, 00:18:17.284 "data_size": 7936 00:18:17.284 }, 00:18:17.284 { 00:18:17.284 "name": "BaseBdev2", 00:18:17.284 "uuid": "83cf9a74-1c8e-58eb-b725-0a968999dd0c", 00:18:17.284 "is_configured": true, 00:18:17.284 "data_offset": 256, 00:18:17.284 "data_size": 7936 00:18:17.284 } 00:18:17.284 ] 00:18:17.284 }' 00:18:17.284 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:17.284 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:17.284 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:17.284 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:17.284 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:17.284 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:17.284 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:17.284 10:41:20 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:17.284 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:17.284 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:17.284 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.284 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.284 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.284 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.284 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.284 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.284 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.284 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.284 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.284 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.284 "name": "raid_bdev1", 00:18:17.284 "uuid": "81e54f47-a4f3-4f9a-b128-246e03177e28", 00:18:17.284 "strip_size_kb": 0, 00:18:17.284 "state": "online", 00:18:17.284 "raid_level": "raid1", 00:18:17.284 "superblock": true, 00:18:17.284 "num_base_bdevs": 2, 00:18:17.284 "num_base_bdevs_discovered": 2, 00:18:17.284 "num_base_bdevs_operational": 2, 00:18:17.284 "base_bdevs_list": [ 00:18:17.284 { 00:18:17.284 "name": "spare", 00:18:17.284 "uuid": "c81ccb62-977a-59ab-95b1-a77cda9fa119", 00:18:17.284 "is_configured": true, 00:18:17.284 "data_offset": 256, 00:18:17.284 "data_size": 7936 00:18:17.284 }, 00:18:17.284 
{ 00:18:17.284 "name": "BaseBdev2", 00:18:17.284 "uuid": "83cf9a74-1c8e-58eb-b725-0a968999dd0c", 00:18:17.284 "is_configured": true, 00:18:17.284 "data_offset": 256, 00:18:17.284 "data_size": 7936 00:18:17.284 } 00:18:17.284 ] 00:18:17.284 }' 00:18:17.284 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.284 10:41:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.853 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:17.853 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.853 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.853 [2024-11-20 10:41:21.146192] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:17.854 [2024-11-20 10:41:21.146221] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:17.854 [2024-11-20 10:41:21.146296] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:17.854 [2024-11-20 10:41:21.146375] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:17.854 [2024-11-20 10:41:21.146387] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:17.854 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.854 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.854 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:18:17.854 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.854 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.854 
10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.854 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:17.854 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:17.854 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:17.854 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:17.854 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:17.854 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:17.854 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:17.854 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:17.854 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:17.854 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:18:17.854 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:17.854 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:17.854 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:18.114 /dev/nbd0 00:18:18.114 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:18.114 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:18.114 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:18.114 10:41:21 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:18:18.114 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:18.114 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:18.114 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:18.114 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:18:18.114 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:18.114 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:18.114 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:18.114 1+0 records in 00:18:18.114 1+0 records out 00:18:18.114 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275668 s, 14.9 MB/s 00:18:18.114 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.114 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:18:18.114 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.114 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:18.114 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:18:18.114 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:18.114 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:18.114 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:18.374 /dev/nbd1 00:18:18.374 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:18.374 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:18.374 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:18.374 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:18:18.374 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:18.374 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:18.374 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:18.374 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:18:18.374 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:18.374 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:18.374 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:18.374 1+0 records in 00:18:18.374 1+0 records out 00:18:18.374 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408337 s, 10.0 MB/s 00:18:18.374 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.374 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:18:18.374 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.374 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:18:18.374 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:18:18.374 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:18.374 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:18.374 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:18.635 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:18.635 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:18.635 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:18.635 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:18.635 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:18:18.635 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:18.635 10:41:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:18.635 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:18.635 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:18.635 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:18.635 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:18.635 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:18.635 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:18.635 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@41 -- # break 00:18:18.635 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:18.635 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:18.635 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:18.898 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:18.898 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:18.898 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:18.898 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:18.898 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:18.898 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:18.898 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:18.898 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:18.898 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:18.898 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:18.898 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.898 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.898 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.898 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:18.898 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.898 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.898 [2024-11-20 10:41:22.292669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:18.898 [2024-11-20 10:41:22.292727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.898 [2024-11-20 10:41:22.292747] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:18.898 [2024-11-20 10:41:22.292757] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.898 [2024-11-20 10:41:22.294927] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.898 [2024-11-20 10:41:22.295017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:18.898 [2024-11-20 10:41:22.295131] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:18.898 [2024-11-20 10:41:22.295203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:18.898 [2024-11-20 10:41:22.295434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:18.898 spare 00:18:18.898 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.898 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:18.898 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.898 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.164 [2024-11-20 10:41:22.395391] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:19.164 [2024-11-20 10:41:22.395453] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:19.164 [2024-11-20 10:41:22.395748] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:19.164 [2024-11-20 10:41:22.395915] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:19.164 [2024-11-20 10:41:22.395926] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:19.164 [2024-11-20 10:41:22.396098] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:19.164 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.164 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:19.164 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:19.164 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:19.164 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:19.164 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:19.164 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:19.164 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.164 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.165 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.165 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.165 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.165 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.165 10:41:22 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.165 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.165 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.165 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.165 "name": "raid_bdev1", 00:18:19.165 "uuid": "81e54f47-a4f3-4f9a-b128-246e03177e28", 00:18:19.165 "strip_size_kb": 0, 00:18:19.165 "state": "online", 00:18:19.165 "raid_level": "raid1", 00:18:19.165 "superblock": true, 00:18:19.165 "num_base_bdevs": 2, 00:18:19.165 "num_base_bdevs_discovered": 2, 00:18:19.165 "num_base_bdevs_operational": 2, 00:18:19.165 "base_bdevs_list": [ 00:18:19.165 { 00:18:19.165 "name": "spare", 00:18:19.165 "uuid": "c81ccb62-977a-59ab-95b1-a77cda9fa119", 00:18:19.165 "is_configured": true, 00:18:19.165 "data_offset": 256, 00:18:19.165 "data_size": 7936 00:18:19.165 }, 00:18:19.165 { 00:18:19.165 "name": "BaseBdev2", 00:18:19.165 "uuid": "83cf9a74-1c8e-58eb-b725-0a968999dd0c", 00:18:19.165 "is_configured": true, 00:18:19.165 "data_offset": 256, 00:18:19.165 "data_size": 7936 00:18:19.165 } 00:18:19.165 ] 00:18:19.165 }' 00:18:19.165 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.165 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.426 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:19.426 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:19.426 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:19.426 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:19.426 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:19.426 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.426 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.426 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.426 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.426 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.426 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:19.426 "name": "raid_bdev1", 00:18:19.426 "uuid": "81e54f47-a4f3-4f9a-b128-246e03177e28", 00:18:19.426 "strip_size_kb": 0, 00:18:19.426 "state": "online", 00:18:19.426 "raid_level": "raid1", 00:18:19.426 "superblock": true, 00:18:19.426 "num_base_bdevs": 2, 00:18:19.426 "num_base_bdevs_discovered": 2, 00:18:19.426 "num_base_bdevs_operational": 2, 00:18:19.426 "base_bdevs_list": [ 00:18:19.426 { 00:18:19.426 "name": "spare", 00:18:19.426 "uuid": "c81ccb62-977a-59ab-95b1-a77cda9fa119", 00:18:19.426 "is_configured": true, 00:18:19.426 "data_offset": 256, 00:18:19.426 "data_size": 7936 00:18:19.426 }, 00:18:19.426 { 00:18:19.426 "name": "BaseBdev2", 00:18:19.426 "uuid": "83cf9a74-1c8e-58eb-b725-0a968999dd0c", 00:18:19.426 "is_configured": true, 00:18:19.426 "data_offset": 256, 00:18:19.426 "data_size": 7936 00:18:19.426 } 00:18:19.426 ] 00:18:19.426 }' 00:18:19.426 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:19.686 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:19.686 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:19.686 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:19.686 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.686 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.686 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.686 10:41:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:19.686 10:41:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.686 10:41:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:19.686 10:41:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:19.686 10:41:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.686 10:41:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.686 [2024-11-20 10:41:23.043439] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:19.686 10:41:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.686 10:41:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:19.686 10:41:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:19.686 10:41:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:19.686 10:41:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:19.686 10:41:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:19.686 10:41:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:19.686 10:41:23 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.686 10:41:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.686 10:41:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.686 10:41:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.686 10:41:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.686 10:41:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.686 10:41:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.686 10:41:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.686 10:41:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.686 10:41:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.686 "name": "raid_bdev1", 00:18:19.686 "uuid": "81e54f47-a4f3-4f9a-b128-246e03177e28", 00:18:19.686 "strip_size_kb": 0, 00:18:19.686 "state": "online", 00:18:19.686 "raid_level": "raid1", 00:18:19.686 "superblock": true, 00:18:19.686 "num_base_bdevs": 2, 00:18:19.686 "num_base_bdevs_discovered": 1, 00:18:19.686 "num_base_bdevs_operational": 1, 00:18:19.686 "base_bdevs_list": [ 00:18:19.686 { 00:18:19.686 "name": null, 00:18:19.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.686 "is_configured": false, 00:18:19.686 "data_offset": 0, 00:18:19.686 "data_size": 7936 00:18:19.686 }, 00:18:19.686 { 00:18:19.686 "name": "BaseBdev2", 00:18:19.686 "uuid": "83cf9a74-1c8e-58eb-b725-0a968999dd0c", 00:18:19.686 "is_configured": true, 00:18:19.686 "data_offset": 256, 00:18:19.686 "data_size": 7936 00:18:19.686 } 00:18:19.686 ] 00:18:19.686 }' 00:18:19.686 10:41:23 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.686 10:41:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.256 10:41:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:20.256 10:41:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.256 10:41:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.256 [2024-11-20 10:41:23.446756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:20.256 [2024-11-20 10:41:23.446993] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:20.257 [2024-11-20 10:41:23.447061] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:20.257 [2024-11-20 10:41:23.447124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:20.257 [2024-11-20 10:41:23.462796] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:20.257 10:41:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.257 10:41:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:20.257 [2024-11-20 10:41:23.464646] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:21.196 10:41:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:21.196 10:41:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.196 10:41:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:21.196 10:41:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:21.196 
10:41:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.196 10:41:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.196 10:41:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.196 10:41:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.196 10:41:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.196 10:41:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.196 10:41:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.196 "name": "raid_bdev1", 00:18:21.196 "uuid": "81e54f47-a4f3-4f9a-b128-246e03177e28", 00:18:21.196 "strip_size_kb": 0, 00:18:21.196 "state": "online", 00:18:21.196 "raid_level": "raid1", 00:18:21.196 "superblock": true, 00:18:21.196 "num_base_bdevs": 2, 00:18:21.196 "num_base_bdevs_discovered": 2, 00:18:21.196 "num_base_bdevs_operational": 2, 00:18:21.196 "process": { 00:18:21.196 "type": "rebuild", 00:18:21.196 "target": "spare", 00:18:21.196 "progress": { 00:18:21.196 "blocks": 2560, 00:18:21.196 "percent": 32 00:18:21.196 } 00:18:21.196 }, 00:18:21.196 "base_bdevs_list": [ 00:18:21.196 { 00:18:21.196 "name": "spare", 00:18:21.196 "uuid": "c81ccb62-977a-59ab-95b1-a77cda9fa119", 00:18:21.196 "is_configured": true, 00:18:21.196 "data_offset": 256, 00:18:21.196 "data_size": 7936 00:18:21.196 }, 00:18:21.196 { 00:18:21.196 "name": "BaseBdev2", 00:18:21.196 "uuid": "83cf9a74-1c8e-58eb-b725-0a968999dd0c", 00:18:21.196 "is_configured": true, 00:18:21.196 "data_offset": 256, 00:18:21.196 "data_size": 7936 00:18:21.196 } 00:18:21.196 ] 00:18:21.196 }' 00:18:21.196 10:41:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.196 10:41:24 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:21.196 10:41:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.196 10:41:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:21.196 10:41:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:21.196 10:41:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.196 10:41:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.196 [2024-11-20 10:41:24.627771] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:21.196 [2024-11-20 10:41:24.669206] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:21.196 [2024-11-20 10:41:24.669265] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:21.196 [2024-11-20 10:41:24.669279] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:21.196 [2024-11-20 10:41:24.669287] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:21.456 10:41:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.456 10:41:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:21.456 10:41:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:21.456 10:41:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:21.456 10:41:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:21.456 10:41:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:21.456 10:41:24 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:21.456 10:41:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.456 10:41:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.456 10:41:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.456 10:41:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.456 10:41:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.456 10:41:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.456 10:41:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.456 10:41:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.456 10:41:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.456 10:41:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.456 "name": "raid_bdev1", 00:18:21.456 "uuid": "81e54f47-a4f3-4f9a-b128-246e03177e28", 00:18:21.456 "strip_size_kb": 0, 00:18:21.456 "state": "online", 00:18:21.456 "raid_level": "raid1", 00:18:21.456 "superblock": true, 00:18:21.456 "num_base_bdevs": 2, 00:18:21.456 "num_base_bdevs_discovered": 1, 00:18:21.456 "num_base_bdevs_operational": 1, 00:18:21.456 "base_bdevs_list": [ 00:18:21.456 { 00:18:21.456 "name": null, 00:18:21.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.456 "is_configured": false, 00:18:21.456 "data_offset": 0, 00:18:21.456 "data_size": 7936 00:18:21.456 }, 00:18:21.456 { 00:18:21.456 "name": "BaseBdev2", 00:18:21.456 "uuid": "83cf9a74-1c8e-58eb-b725-0a968999dd0c", 00:18:21.456 "is_configured": true, 00:18:21.456 "data_offset": 256, 00:18:21.456 
"data_size": 7936 00:18:21.456 } 00:18:21.456 ] 00:18:21.456 }' 00:18:21.457 10:41:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.457 10:41:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.717 10:41:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:21.717 10:41:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.717 10:41:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.717 [2024-11-20 10:41:25.145493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:21.717 [2024-11-20 10:41:25.145595] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:21.717 [2024-11-20 10:41:25.145630] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:21.717 [2024-11-20 10:41:25.145687] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:21.717 [2024-11-20 10:41:25.146159] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:21.717 [2024-11-20 10:41:25.146220] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:21.717 [2024-11-20 10:41:25.146328] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:21.717 [2024-11-20 10:41:25.146386] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:21.717 [2024-11-20 10:41:25.146426] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:21.717 [2024-11-20 10:41:25.146488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:21.717 [2024-11-20 10:41:25.161717] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:21.717 spare 00:18:21.717 10:41:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.717 [2024-11-20 10:41:25.163542] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:21.717 10:41:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:23.098 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:23.098 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:23.098 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:23.098 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:23.098 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:23.098 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.098 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.098 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.098 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:23.098 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.098 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.098 "name": "raid_bdev1", 00:18:23.098 "uuid": "81e54f47-a4f3-4f9a-b128-246e03177e28", 00:18:23.098 "strip_size_kb": 0, 00:18:23.098 
"state": "online", 00:18:23.098 "raid_level": "raid1", 00:18:23.098 "superblock": true, 00:18:23.098 "num_base_bdevs": 2, 00:18:23.098 "num_base_bdevs_discovered": 2, 00:18:23.098 "num_base_bdevs_operational": 2, 00:18:23.098 "process": { 00:18:23.098 "type": "rebuild", 00:18:23.098 "target": "spare", 00:18:23.099 "progress": { 00:18:23.099 "blocks": 2560, 00:18:23.099 "percent": 32 00:18:23.099 } 00:18:23.099 }, 00:18:23.099 "base_bdevs_list": [ 00:18:23.099 { 00:18:23.099 "name": "spare", 00:18:23.099 "uuid": "c81ccb62-977a-59ab-95b1-a77cda9fa119", 00:18:23.099 "is_configured": true, 00:18:23.099 "data_offset": 256, 00:18:23.099 "data_size": 7936 00:18:23.099 }, 00:18:23.099 { 00:18:23.099 "name": "BaseBdev2", 00:18:23.099 "uuid": "83cf9a74-1c8e-58eb-b725-0a968999dd0c", 00:18:23.099 "is_configured": true, 00:18:23.099 "data_offset": 256, 00:18:23.099 "data_size": 7936 00:18:23.099 } 00:18:23.099 ] 00:18:23.099 }' 00:18:23.099 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:23.099 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:23.099 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:23.099 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:23.099 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:23.099 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.099 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:23.099 [2024-11-20 10:41:26.323720] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:23.099 [2024-11-20 10:41:26.368067] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:18:23.099 [2024-11-20 10:41:26.368119] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:23.099 [2024-11-20 10:41:26.368152] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:23.099 [2024-11-20 10:41:26.368159] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:23.099 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.099 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:23.099 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:23.099 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:23.099 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:23.099 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:23.099 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:23.099 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.099 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.099 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.099 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.099 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.099 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.099 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.099 10:41:26 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:23.099 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.099 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.099 "name": "raid_bdev1", 00:18:23.099 "uuid": "81e54f47-a4f3-4f9a-b128-246e03177e28", 00:18:23.099 "strip_size_kb": 0, 00:18:23.099 "state": "online", 00:18:23.099 "raid_level": "raid1", 00:18:23.099 "superblock": true, 00:18:23.099 "num_base_bdevs": 2, 00:18:23.099 "num_base_bdevs_discovered": 1, 00:18:23.099 "num_base_bdevs_operational": 1, 00:18:23.099 "base_bdevs_list": [ 00:18:23.099 { 00:18:23.099 "name": null, 00:18:23.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.099 "is_configured": false, 00:18:23.099 "data_offset": 0, 00:18:23.099 "data_size": 7936 00:18:23.099 }, 00:18:23.099 { 00:18:23.099 "name": "BaseBdev2", 00:18:23.099 "uuid": "83cf9a74-1c8e-58eb-b725-0a968999dd0c", 00:18:23.099 "is_configured": true, 00:18:23.099 "data_offset": 256, 00:18:23.099 "data_size": 7936 00:18:23.099 } 00:18:23.099 ] 00:18:23.099 }' 00:18:23.099 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.099 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:23.670 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:23.670 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:23.670 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:23.670 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:23.670 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:23.670 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.670 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.670 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.670 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:23.670 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.670 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.670 "name": "raid_bdev1", 00:18:23.670 "uuid": "81e54f47-a4f3-4f9a-b128-246e03177e28", 00:18:23.670 "strip_size_kb": 0, 00:18:23.670 "state": "online", 00:18:23.670 "raid_level": "raid1", 00:18:23.670 "superblock": true, 00:18:23.670 "num_base_bdevs": 2, 00:18:23.670 "num_base_bdevs_discovered": 1, 00:18:23.670 "num_base_bdevs_operational": 1, 00:18:23.670 "base_bdevs_list": [ 00:18:23.670 { 00:18:23.670 "name": null, 00:18:23.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.670 "is_configured": false, 00:18:23.670 "data_offset": 0, 00:18:23.670 "data_size": 7936 00:18:23.670 }, 00:18:23.670 { 00:18:23.670 "name": "BaseBdev2", 00:18:23.670 "uuid": "83cf9a74-1c8e-58eb-b725-0a968999dd0c", 00:18:23.670 "is_configured": true, 00:18:23.670 "data_offset": 256, 00:18:23.670 "data_size": 7936 00:18:23.670 } 00:18:23.670 ] 00:18:23.670 }' 00:18:23.670 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:23.670 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:23.670 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:23.670 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:23.670 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:23.670 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.670 10:41:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:23.670 10:41:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.670 10:41:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:23.670 10:41:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.670 10:41:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:23.670 [2024-11-20 10:41:27.004525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:23.670 [2024-11-20 10:41:27.004580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.670 [2024-11-20 10:41:27.004601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:23.670 [2024-11-20 10:41:27.004619] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.670 [2024-11-20 10:41:27.005041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.670 [2024-11-20 10:41:27.005057] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:23.670 [2024-11-20 10:41:27.005129] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:23.670 [2024-11-20 10:41:27.005143] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:23.670 [2024-11-20 10:41:27.005154] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:23.670 [2024-11-20 10:41:27.005164] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:23.670 BaseBdev1 00:18:23.670 10:41:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.670 10:41:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:24.610 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:24.610 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:24.610 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:24.610 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:24.610 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:24.610 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:24.610 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.610 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.610 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.610 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.610 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.610 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.610 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.610 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.610 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.610 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.610 "name": "raid_bdev1", 00:18:24.610 "uuid": "81e54f47-a4f3-4f9a-b128-246e03177e28", 00:18:24.610 "strip_size_kb": 0, 00:18:24.610 "state": "online", 00:18:24.610 "raid_level": "raid1", 00:18:24.610 "superblock": true, 00:18:24.610 "num_base_bdevs": 2, 00:18:24.610 "num_base_bdevs_discovered": 1, 00:18:24.610 "num_base_bdevs_operational": 1, 00:18:24.610 "base_bdevs_list": [ 00:18:24.610 { 00:18:24.611 "name": null, 00:18:24.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.611 "is_configured": false, 00:18:24.611 "data_offset": 0, 00:18:24.611 "data_size": 7936 00:18:24.611 }, 00:18:24.611 { 00:18:24.611 "name": "BaseBdev2", 00:18:24.611 "uuid": "83cf9a74-1c8e-58eb-b725-0a968999dd0c", 00:18:24.611 "is_configured": true, 00:18:24.611 "data_offset": 256, 00:18:24.611 "data_size": 7936 00:18:24.611 } 00:18:24.611 ] 00:18:24.611 }' 00:18:24.611 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.611 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:25.181 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:25.181 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:25.181 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:25.181 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:25.181 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:25.181 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.181 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:25.181 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.181 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:25.181 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.181 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:25.181 "name": "raid_bdev1", 00:18:25.181 "uuid": "81e54f47-a4f3-4f9a-b128-246e03177e28", 00:18:25.181 "strip_size_kb": 0, 00:18:25.181 "state": "online", 00:18:25.181 "raid_level": "raid1", 00:18:25.181 "superblock": true, 00:18:25.181 "num_base_bdevs": 2, 00:18:25.181 "num_base_bdevs_discovered": 1, 00:18:25.181 "num_base_bdevs_operational": 1, 00:18:25.181 "base_bdevs_list": [ 00:18:25.181 { 00:18:25.181 "name": null, 00:18:25.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.181 "is_configured": false, 00:18:25.181 "data_offset": 0, 00:18:25.181 "data_size": 7936 00:18:25.181 }, 00:18:25.181 { 00:18:25.181 "name": "BaseBdev2", 00:18:25.181 "uuid": "83cf9a74-1c8e-58eb-b725-0a968999dd0c", 00:18:25.181 "is_configured": true, 00:18:25.181 "data_offset": 256, 00:18:25.181 "data_size": 7936 00:18:25.181 } 00:18:25.181 ] 00:18:25.181 }' 00:18:25.181 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:25.182 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:25.182 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:25.182 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:25.182 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:25.182 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:18:25.182 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:25.182 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:25.182 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.182 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:25.182 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.182 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:25.182 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.182 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:25.182 [2024-11-20 10:41:28.553912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:25.182 [2024-11-20 10:41:28.554072] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:25.182 [2024-11-20 10:41:28.554089] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:25.182 request: 00:18:25.182 { 00:18:25.182 "base_bdev": "BaseBdev1", 00:18:25.182 "raid_bdev": "raid_bdev1", 00:18:25.182 "method": "bdev_raid_add_base_bdev", 00:18:25.182 "req_id": 1 00:18:25.182 } 00:18:25.182 Got JSON-RPC error response 00:18:25.182 response: 00:18:25.182 { 00:18:25.182 "code": -22, 00:18:25.182 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:25.182 } 00:18:25.182 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:18:25.182 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:18:25.182 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:25.182 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:25.182 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:25.182 10:41:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:26.123 10:41:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:26.123 10:41:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:26.123 10:41:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:26.123 10:41:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:26.123 10:41:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:26.123 10:41:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:26.123 10:41:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.123 10:41:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.123 10:41:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.123 10:41:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.123 10:41:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.123 10:41:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.123 10:41:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:18:26.123 10:41:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:26.123 10:41:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.382 10:41:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.382 "name": "raid_bdev1", 00:18:26.382 "uuid": "81e54f47-a4f3-4f9a-b128-246e03177e28", 00:18:26.382 "strip_size_kb": 0, 00:18:26.382 "state": "online", 00:18:26.382 "raid_level": "raid1", 00:18:26.382 "superblock": true, 00:18:26.382 "num_base_bdevs": 2, 00:18:26.382 "num_base_bdevs_discovered": 1, 00:18:26.382 "num_base_bdevs_operational": 1, 00:18:26.382 "base_bdevs_list": [ 00:18:26.382 { 00:18:26.382 "name": null, 00:18:26.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.382 "is_configured": false, 00:18:26.382 "data_offset": 0, 00:18:26.382 "data_size": 7936 00:18:26.382 }, 00:18:26.382 { 00:18:26.382 "name": "BaseBdev2", 00:18:26.382 "uuid": "83cf9a74-1c8e-58eb-b725-0a968999dd0c", 00:18:26.382 "is_configured": true, 00:18:26.382 "data_offset": 256, 00:18:26.382 "data_size": 7936 00:18:26.382 } 00:18:26.382 ] 00:18:26.382 }' 00:18:26.382 10:41:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.383 10:41:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:26.642 10:41:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:26.642 10:41:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:26.642 10:41:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:26.642 10:41:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:26.642 10:41:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:26.642 10:41:29 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.642 10:41:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.642 10:41:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:26.642 10:41:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.642 10:41:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.642 10:41:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:26.642 "name": "raid_bdev1", 00:18:26.642 "uuid": "81e54f47-a4f3-4f9a-b128-246e03177e28", 00:18:26.642 "strip_size_kb": 0, 00:18:26.642 "state": "online", 00:18:26.642 "raid_level": "raid1", 00:18:26.642 "superblock": true, 00:18:26.642 "num_base_bdevs": 2, 00:18:26.642 "num_base_bdevs_discovered": 1, 00:18:26.642 "num_base_bdevs_operational": 1, 00:18:26.642 "base_bdevs_list": [ 00:18:26.642 { 00:18:26.642 "name": null, 00:18:26.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.642 "is_configured": false, 00:18:26.642 "data_offset": 0, 00:18:26.642 "data_size": 7936 00:18:26.642 }, 00:18:26.642 { 00:18:26.642 "name": "BaseBdev2", 00:18:26.642 "uuid": "83cf9a74-1c8e-58eb-b725-0a968999dd0c", 00:18:26.642 "is_configured": true, 00:18:26.642 "data_offset": 256, 00:18:26.642 "data_size": 7936 00:18:26.642 } 00:18:26.642 ] 00:18:26.642 }' 00:18:26.642 10:41:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:26.642 10:41:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:26.642 10:41:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:26.902 10:41:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:26.902 10:41:30 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86657 00:18:26.902 10:41:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86657 ']' 00:18:26.902 10:41:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86657 00:18:26.902 10:41:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:18:26.902 10:41:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:26.902 10:41:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86657 00:18:26.902 10:41:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:26.902 10:41:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:26.902 10:41:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86657' 00:18:26.902 killing process with pid 86657 00:18:26.902 Received shutdown signal, test time was about 60.000000 seconds 00:18:26.902 00:18:26.902 Latency(us) 00:18:26.902 [2024-11-20T10:41:30.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.902 [2024-11-20T10:41:30.381Z] =================================================================================================================== 00:18:26.902 [2024-11-20T10:41:30.381Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:26.902 10:41:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86657 00:18:26.902 [2024-11-20 10:41:30.180431] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:26.902 [2024-11-20 10:41:30.180555] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:26.902 [2024-11-20 10:41:30.180603] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going 
to free all in destruct 00:18:26.902 [2024-11-20 10:41:30.180614] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:26.902 10:41:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86657 00:18:27.162 [2024-11-20 10:41:30.457847] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:28.103 10:41:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:18:28.103 00:18:28.103 real 0m19.498s 00:18:28.103 user 0m25.424s 00:18:28.103 sys 0m2.602s 00:18:28.103 10:41:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:28.103 10:41:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:28.103 ************************************ 00:18:28.103 END TEST raid_rebuild_test_sb_4k 00:18:28.103 ************************************ 00:18:28.103 10:41:31 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:18:28.103 10:41:31 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:18:28.103 10:41:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:28.103 10:41:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:28.103 10:41:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:28.103 ************************************ 00:18:28.103 START TEST raid_state_function_test_sb_md_separate 00:18:28.103 ************************************ 00:18:28.103 10:41:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:28.103 10:41:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:28.103 10:41:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:28.103 10:41:31 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:28.103 10:41:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:28.103 10:41:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:28.103 10:41:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:28.103 10:41:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:28.103 10:41:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:28.103 10:41:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:28.103 10:41:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:28.103 10:41:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:28.103 10:41:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:28.103 10:41:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:28.103 10:41:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:28.103 10:41:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:28.103 10:41:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:28.103 10:41:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:28.103 10:41:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:28.103 10:41:31 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:28.103 10:41:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:28.103 10:41:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:28.103 10:41:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:28.103 10:41:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87343 00:18:28.103 10:41:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:28.103 10:41:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87343' 00:18:28.103 Process raid pid: 87343 00:18:28.103 10:41:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87343 00:18:28.103 10:41:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87343 ']' 00:18:28.103 10:41:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.103 10:41:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:28.103 10:41:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:28.103 10:41:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:28.103 10:41:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.371 [2024-11-20 10:41:31.650406] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:18:28.371 [2024-11-20 10:41:31.650612] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.371 [2024-11-20 10:41:31.822708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.648 [2024-11-20 10:41:31.932235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.907 [2024-11-20 10:41:32.127902] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:28.907 [2024-11-20 10:41:32.128014] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:29.167 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:29.167 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:29.167 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:29.167 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.167 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.167 [2024-11-20 10:41:32.471011] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:29.167 [2024-11-20 10:41:32.471116] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:18:29.167 [2024-11-20 10:41:32.471146] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:29.167 [2024-11-20 10:41:32.471156] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:29.167 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.167 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:29.167 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:29.167 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:29.167 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:29.167 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:29.167 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:29.167 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.167 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.167 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.167 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.167 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.167 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:18:29.167 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.167 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.167 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.167 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.167 "name": "Existed_Raid", 00:18:29.167 "uuid": "841c1b65-4e5e-4694-ab84-9404bec41f98", 00:18:29.167 "strip_size_kb": 0, 00:18:29.167 "state": "configuring", 00:18:29.167 "raid_level": "raid1", 00:18:29.167 "superblock": true, 00:18:29.167 "num_base_bdevs": 2, 00:18:29.167 "num_base_bdevs_discovered": 0, 00:18:29.167 "num_base_bdevs_operational": 2, 00:18:29.167 "base_bdevs_list": [ 00:18:29.167 { 00:18:29.167 "name": "BaseBdev1", 00:18:29.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.167 "is_configured": false, 00:18:29.167 "data_offset": 0, 00:18:29.167 "data_size": 0 00:18:29.167 }, 00:18:29.167 { 00:18:29.167 "name": "BaseBdev2", 00:18:29.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.167 "is_configured": false, 00:18:29.167 "data_offset": 0, 00:18:29.167 "data_size": 0 00:18:29.168 } 00:18:29.168 ] 00:18:29.168 }' 00:18:29.168 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.168 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.738 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:29.738 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.738 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.738 
[2024-11-20 10:41:32.918170] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:29.738 [2024-11-20 10:41:32.918246] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:29.738 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.738 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:29.738 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.738 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.738 [2024-11-20 10:41:32.930156] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:29.738 [2024-11-20 10:41:32.930230] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:29.738 [2024-11-20 10:41:32.930271] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:29.738 [2024-11-20 10:41:32.930295] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:29.738 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.738 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:18:29.738 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.738 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.738 [2024-11-20 10:41:32.977039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:29.738 
BaseBdev1 00:18:29.738 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.738 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:29.738 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:29.738 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:29.738 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:18:29.738 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:29.738 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:29.738 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:29.738 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.738 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.738 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.738 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:29.738 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.738 10:41:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.738 [ 00:18:29.738 { 00:18:29.738 "name": "BaseBdev1", 00:18:29.738 "aliases": [ 00:18:29.738 "ac8fa5b9-deee-4bac-a67d-65241ce33707" 00:18:29.738 ], 00:18:29.738 "product_name": "Malloc disk", 
00:18:29.738 "block_size": 4096, 00:18:29.738 "num_blocks": 8192, 00:18:29.738 "uuid": "ac8fa5b9-deee-4bac-a67d-65241ce33707", 00:18:29.738 "md_size": 32, 00:18:29.738 "md_interleave": false, 00:18:29.738 "dif_type": 0, 00:18:29.738 "assigned_rate_limits": { 00:18:29.738 "rw_ios_per_sec": 0, 00:18:29.738 "rw_mbytes_per_sec": 0, 00:18:29.738 "r_mbytes_per_sec": 0, 00:18:29.738 "w_mbytes_per_sec": 0 00:18:29.738 }, 00:18:29.738 "claimed": true, 00:18:29.738 "claim_type": "exclusive_write", 00:18:29.738 "zoned": false, 00:18:29.738 "supported_io_types": { 00:18:29.738 "read": true, 00:18:29.738 "write": true, 00:18:29.738 "unmap": true, 00:18:29.738 "flush": true, 00:18:29.738 "reset": true, 00:18:29.738 "nvme_admin": false, 00:18:29.738 "nvme_io": false, 00:18:29.738 "nvme_io_md": false, 00:18:29.738 "write_zeroes": true, 00:18:29.738 "zcopy": true, 00:18:29.738 "get_zone_info": false, 00:18:29.738 "zone_management": false, 00:18:29.738 "zone_append": false, 00:18:29.738 "compare": false, 00:18:29.738 "compare_and_write": false, 00:18:29.738 "abort": true, 00:18:29.738 "seek_hole": false, 00:18:29.738 "seek_data": false, 00:18:29.738 "copy": true, 00:18:29.738 "nvme_iov_md": false 00:18:29.738 }, 00:18:29.738 "memory_domains": [ 00:18:29.738 { 00:18:29.738 "dma_device_id": "system", 00:18:29.738 "dma_device_type": 1 00:18:29.738 }, 00:18:29.738 { 00:18:29.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.738 "dma_device_type": 2 00:18:29.738 } 00:18:29.738 ], 00:18:29.738 "driver_specific": {} 00:18:29.738 } 00:18:29.738 ] 00:18:29.738 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.738 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:18:29.738 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:29.738 10:41:33 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:29.738 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:29.738 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:29.739 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:29.739 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:29.739 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.739 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.739 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.739 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.739 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.739 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.739 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.739 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.739 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.739 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.739 "name": "Existed_Raid", 00:18:29.739 "uuid": "170ec819-5c47-48e5-a20f-450c20f2ae7c", 
00:18:29.739 "strip_size_kb": 0, 00:18:29.739 "state": "configuring", 00:18:29.739 "raid_level": "raid1", 00:18:29.739 "superblock": true, 00:18:29.739 "num_base_bdevs": 2, 00:18:29.739 "num_base_bdevs_discovered": 1, 00:18:29.739 "num_base_bdevs_operational": 2, 00:18:29.739 "base_bdevs_list": [ 00:18:29.739 { 00:18:29.739 "name": "BaseBdev1", 00:18:29.739 "uuid": "ac8fa5b9-deee-4bac-a67d-65241ce33707", 00:18:29.739 "is_configured": true, 00:18:29.739 "data_offset": 256, 00:18:29.739 "data_size": 7936 00:18:29.739 }, 00:18:29.739 { 00:18:29.739 "name": "BaseBdev2", 00:18:29.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.739 "is_configured": false, 00:18:29.739 "data_offset": 0, 00:18:29.739 "data_size": 0 00:18:29.739 } 00:18:29.739 ] 00:18:29.739 }' 00:18:29.739 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.739 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.999 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:29.999 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.999 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.999 [2024-11-20 10:41:33.428318] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:29.999 [2024-11-20 10:41:33.428373] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:29.999 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.999 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:29.999 10:41:33 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.999 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.999 [2024-11-20 10:41:33.440407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:29.999 [2024-11-20 10:41:33.442128] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:29.999 [2024-11-20 10:41:33.442167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:29.999 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.999 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:29.999 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:29.999 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:29.999 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:29.999 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:29.999 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:29.999 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:29.999 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:29.999 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.999 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.999 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.999 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.999 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.999 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.999 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.999 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.999 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.259 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.259 "name": "Existed_Raid", 00:18:30.259 "uuid": "453d64ed-da0e-48cf-9b03-d36480aa2703", 00:18:30.259 "strip_size_kb": 0, 00:18:30.259 "state": "configuring", 00:18:30.259 "raid_level": "raid1", 00:18:30.259 "superblock": true, 00:18:30.259 "num_base_bdevs": 2, 00:18:30.259 "num_base_bdevs_discovered": 1, 00:18:30.259 "num_base_bdevs_operational": 2, 00:18:30.259 "base_bdevs_list": [ 00:18:30.259 { 00:18:30.259 "name": "BaseBdev1", 00:18:30.259 "uuid": "ac8fa5b9-deee-4bac-a67d-65241ce33707", 00:18:30.259 "is_configured": true, 00:18:30.259 "data_offset": 256, 00:18:30.259 "data_size": 7936 00:18:30.259 }, 00:18:30.259 { 00:18:30.259 "name": "BaseBdev2", 00:18:30.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.259 "is_configured": false, 00:18:30.259 "data_offset": 0, 00:18:30.259 "data_size": 0 00:18:30.259 } 00:18:30.259 ] 00:18:30.259 }' 00:18:30.259 10:41:33 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.259 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.520 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:18:30.520 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.520 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.520 [2024-11-20 10:41:33.945284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:30.520 [2024-11-20 10:41:33.945652] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:30.520 [2024-11-20 10:41:33.945710] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:30.520 [2024-11-20 10:41:33.945823] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:30.520 [2024-11-20 10:41:33.945975] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:30.520 [2024-11-20 10:41:33.946015] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:30.520 [2024-11-20 10:41:33.946164] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:30.520 BaseBdev2 00:18:30.520 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.520 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:30.520 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:30.520 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:30.520 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:18:30.520 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:30.520 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:30.520 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:30.520 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.520 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.520 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.520 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:30.520 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.520 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.520 [ 00:18:30.520 { 00:18:30.520 "name": "BaseBdev2", 00:18:30.520 "aliases": [ 00:18:30.520 "137fbf7d-6eed-4d6d-81cf-591f4bd5c200" 00:18:30.520 ], 00:18:30.520 "product_name": "Malloc disk", 00:18:30.520 "block_size": 4096, 00:18:30.520 "num_blocks": 8192, 00:18:30.520 "uuid": "137fbf7d-6eed-4d6d-81cf-591f4bd5c200", 00:18:30.520 "md_size": 32, 00:18:30.520 "md_interleave": false, 00:18:30.520 "dif_type": 0, 00:18:30.520 "assigned_rate_limits": { 00:18:30.520 "rw_ios_per_sec": 0, 00:18:30.520 "rw_mbytes_per_sec": 0, 00:18:30.520 "r_mbytes_per_sec": 0, 00:18:30.520 "w_mbytes_per_sec": 0 00:18:30.520 }, 00:18:30.520 "claimed": true, 00:18:30.520 "claim_type": 
"exclusive_write", 00:18:30.520 "zoned": false, 00:18:30.520 "supported_io_types": { 00:18:30.520 "read": true, 00:18:30.520 "write": true, 00:18:30.520 "unmap": true, 00:18:30.520 "flush": true, 00:18:30.520 "reset": true, 00:18:30.520 "nvme_admin": false, 00:18:30.520 "nvme_io": false, 00:18:30.520 "nvme_io_md": false, 00:18:30.520 "write_zeroes": true, 00:18:30.520 "zcopy": true, 00:18:30.520 "get_zone_info": false, 00:18:30.520 "zone_management": false, 00:18:30.520 "zone_append": false, 00:18:30.520 "compare": false, 00:18:30.520 "compare_and_write": false, 00:18:30.520 "abort": true, 00:18:30.520 "seek_hole": false, 00:18:30.520 "seek_data": false, 00:18:30.520 "copy": true, 00:18:30.520 "nvme_iov_md": false 00:18:30.520 }, 00:18:30.520 "memory_domains": [ 00:18:30.520 { 00:18:30.520 "dma_device_id": "system", 00:18:30.520 "dma_device_type": 1 00:18:30.520 }, 00:18:30.520 { 00:18:30.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:30.520 "dma_device_type": 2 00:18:30.520 } 00:18:30.520 ], 00:18:30.520 "driver_specific": {} 00:18:30.520 } 00:18:30.520 ] 00:18:30.520 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.520 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:18:30.520 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:30.520 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:30.520 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:30.520 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:30.520 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:30.520 
10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:30.520 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:30.520 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:30.521 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.521 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.521 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.521 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.521 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.521 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.521 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.780 10:41:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:30.780 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.780 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.780 "name": "Existed_Raid", 00:18:30.780 "uuid": "453d64ed-da0e-48cf-9b03-d36480aa2703", 00:18:30.780 "strip_size_kb": 0, 00:18:30.780 "state": "online", 00:18:30.780 "raid_level": "raid1", 00:18:30.780 "superblock": true, 00:18:30.780 "num_base_bdevs": 2, 00:18:30.780 "num_base_bdevs_discovered": 2, 00:18:30.780 "num_base_bdevs_operational": 2, 00:18:30.780 
"base_bdevs_list": [ 00:18:30.780 { 00:18:30.780 "name": "BaseBdev1", 00:18:30.780 "uuid": "ac8fa5b9-deee-4bac-a67d-65241ce33707", 00:18:30.780 "is_configured": true, 00:18:30.780 "data_offset": 256, 00:18:30.780 "data_size": 7936 00:18:30.780 }, 00:18:30.780 { 00:18:30.780 "name": "BaseBdev2", 00:18:30.780 "uuid": "137fbf7d-6eed-4d6d-81cf-591f4bd5c200", 00:18:30.780 "is_configured": true, 00:18:30.780 "data_offset": 256, 00:18:30.780 "data_size": 7936 00:18:30.780 } 00:18:30.780 ] 00:18:30.780 }' 00:18:30.780 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.780 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.039 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:31.039 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:31.039 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:31.039 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:31.039 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:31.039 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:31.039 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:31.039 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.039 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.039 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:18:31.039 [2024-11-20 10:41:34.412776] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:31.040 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.040 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:31.040 "name": "Existed_Raid", 00:18:31.040 "aliases": [ 00:18:31.040 "453d64ed-da0e-48cf-9b03-d36480aa2703" 00:18:31.040 ], 00:18:31.040 "product_name": "Raid Volume", 00:18:31.040 "block_size": 4096, 00:18:31.040 "num_blocks": 7936, 00:18:31.040 "uuid": "453d64ed-da0e-48cf-9b03-d36480aa2703", 00:18:31.040 "md_size": 32, 00:18:31.040 "md_interleave": false, 00:18:31.040 "dif_type": 0, 00:18:31.040 "assigned_rate_limits": { 00:18:31.040 "rw_ios_per_sec": 0, 00:18:31.040 "rw_mbytes_per_sec": 0, 00:18:31.040 "r_mbytes_per_sec": 0, 00:18:31.040 "w_mbytes_per_sec": 0 00:18:31.040 }, 00:18:31.040 "claimed": false, 00:18:31.040 "zoned": false, 00:18:31.040 "supported_io_types": { 00:18:31.040 "read": true, 00:18:31.040 "write": true, 00:18:31.040 "unmap": false, 00:18:31.040 "flush": false, 00:18:31.040 "reset": true, 00:18:31.040 "nvme_admin": false, 00:18:31.040 "nvme_io": false, 00:18:31.040 "nvme_io_md": false, 00:18:31.040 "write_zeroes": true, 00:18:31.040 "zcopy": false, 00:18:31.040 "get_zone_info": false, 00:18:31.040 "zone_management": false, 00:18:31.040 "zone_append": false, 00:18:31.040 "compare": false, 00:18:31.040 "compare_and_write": false, 00:18:31.040 "abort": false, 00:18:31.040 "seek_hole": false, 00:18:31.040 "seek_data": false, 00:18:31.040 "copy": false, 00:18:31.040 "nvme_iov_md": false 00:18:31.040 }, 00:18:31.040 "memory_domains": [ 00:18:31.040 { 00:18:31.040 "dma_device_id": "system", 00:18:31.040 "dma_device_type": 1 00:18:31.040 }, 00:18:31.040 { 00:18:31.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.040 "dma_device_type": 2 00:18:31.040 }, 00:18:31.040 { 
00:18:31.040 "dma_device_id": "system", 00:18:31.040 "dma_device_type": 1 00:18:31.040 }, 00:18:31.040 { 00:18:31.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.040 "dma_device_type": 2 00:18:31.040 } 00:18:31.040 ], 00:18:31.040 "driver_specific": { 00:18:31.040 "raid": { 00:18:31.040 "uuid": "453d64ed-da0e-48cf-9b03-d36480aa2703", 00:18:31.040 "strip_size_kb": 0, 00:18:31.040 "state": "online", 00:18:31.040 "raid_level": "raid1", 00:18:31.040 "superblock": true, 00:18:31.040 "num_base_bdevs": 2, 00:18:31.040 "num_base_bdevs_discovered": 2, 00:18:31.040 "num_base_bdevs_operational": 2, 00:18:31.040 "base_bdevs_list": [ 00:18:31.040 { 00:18:31.040 "name": "BaseBdev1", 00:18:31.040 "uuid": "ac8fa5b9-deee-4bac-a67d-65241ce33707", 00:18:31.040 "is_configured": true, 00:18:31.040 "data_offset": 256, 00:18:31.040 "data_size": 7936 00:18:31.040 }, 00:18:31.040 { 00:18:31.040 "name": "BaseBdev2", 00:18:31.040 "uuid": "137fbf7d-6eed-4d6d-81cf-591f4bd5c200", 00:18:31.040 "is_configured": true, 00:18:31.040 "data_offset": 256, 00:18:31.040 "data_size": 7936 00:18:31.040 } 00:18:31.040 ] 00:18:31.040 } 00:18:31.040 } 00:18:31.040 }' 00:18:31.040 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:31.040 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:31.040 BaseBdev2' 00:18:31.040 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.300 [2024-11-20 10:41:34.620219] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.300 "name": "Existed_Raid", 00:18:31.300 "uuid": "453d64ed-da0e-48cf-9b03-d36480aa2703", 00:18:31.300 "strip_size_kb": 0, 00:18:31.300 "state": "online", 00:18:31.300 "raid_level": "raid1", 00:18:31.300 "superblock": true, 00:18:31.300 "num_base_bdevs": 2, 00:18:31.300 "num_base_bdevs_discovered": 1, 00:18:31.300 "num_base_bdevs_operational": 1, 00:18:31.300 "base_bdevs_list": [ 00:18:31.300 { 00:18:31.300 "name": null, 00:18:31.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.300 "is_configured": false, 00:18:31.300 "data_offset": 0, 00:18:31.300 "data_size": 7936 00:18:31.300 }, 00:18:31.300 { 00:18:31.300 "name": "BaseBdev2", 00:18:31.300 "uuid": 
"137fbf7d-6eed-4d6d-81cf-591f4bd5c200", 00:18:31.300 "is_configured": true, 00:18:31.300 "data_offset": 256, 00:18:31.300 "data_size": 7936 00:18:31.300 } 00:18:31.300 ] 00:18:31.300 }' 00:18:31.300 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.560 10:41:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.820 10:41:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:31.820 10:41:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:31.820 10:41:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:31.820 10:41:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.820 10:41:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.820 10:41:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.820 10:41:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.820 10:41:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:31.820 10:41:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:31.820 10:41:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:31.820 10:41:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.820 10:41:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.820 [2024-11-20 10:41:35.177277] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:31.820 [2024-11-20 10:41:35.177391] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:31.820 [2024-11-20 10:41:35.272219] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:31.820 [2024-11-20 10:41:35.272334] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:31.820 [2024-11-20 10:41:35.272389] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:31.820 10:41:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.820 10:41:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:31.820 10:41:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:31.820 10:41:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:31.820 10:41:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.820 10:41:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.820 10:41:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.820 10:41:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.079 10:41:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:32.079 10:41:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:32.079 10:41:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:32.079 10:41:35 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87343 00:18:32.079 10:41:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87343 ']' 00:18:32.079 10:41:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87343 00:18:32.079 10:41:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:32.080 10:41:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:32.080 10:41:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87343 00:18:32.080 10:41:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:32.080 10:41:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:32.080 10:41:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87343' 00:18:32.080 killing process with pid 87343 00:18:32.080 10:41:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87343 00:18:32.080 [2024-11-20 10:41:35.339174] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:32.080 10:41:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87343 00:18:32.080 [2024-11-20 10:41:35.354851] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:33.020 10:41:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:18:33.020 00:18:33.020 real 0m4.831s 00:18:33.020 user 0m6.958s 00:18:33.020 sys 0m0.794s 00:18:33.020 ************************************ 00:18:33.020 END TEST raid_state_function_test_sb_md_separate 00:18:33.020 
************************************ 00:18:33.020 10:41:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:33.020 10:41:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.020 10:41:36 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:18:33.020 10:41:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:33.020 10:41:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:33.020 10:41:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:33.020 ************************************ 00:18:33.020 START TEST raid_superblock_test_md_separate 00:18:33.020 ************************************ 00:18:33.020 10:41:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:18:33.020 10:41:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:33.020 10:41:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:33.020 10:41:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:33.020 10:41:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:33.020 10:41:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:33.020 10:41:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:33.020 10:41:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:33.020 10:41:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:33.020 10:41:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local 
raid_bdev_name=raid_bdev1 00:18:33.020 10:41:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:33.020 10:41:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:33.020 10:41:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:33.020 10:41:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:33.020 10:41:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:33.020 10:41:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:33.020 10:41:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87591 00:18:33.020 10:41:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:33.020 10:41:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87591 00:18:33.020 10:41:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87591 ']' 00:18:33.020 10:41:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.020 10:41:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:33.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.020 10:41:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:33.020 10:41:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:33.020 10:41:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.280 [2024-11-20 10:41:36.551169] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:18:33.280 [2024-11-20 10:41:36.551376] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87591 ] 00:18:33.280 [2024-11-20 10:41:36.723446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.540 [2024-11-20 10:41:36.821119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.800 [2024-11-20 10:41:37.018993] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:33.800 [2024-11-20 10:41:37.019023] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:34.060 10:41:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:34.060 10:41:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:34.060 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:34.060 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:34.060 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:34.060 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:34.060 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:34.060 10:41:37 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:34.060 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:34.060 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:34.060 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:18:34.060 10:41:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.060 10:41:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.060 malloc1 00:18:34.060 10:41:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.060 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:34.060 10:41:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.060 10:41:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.060 [2024-11-20 10:41:37.402349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:34.060 [2024-11-20 10:41:37.402465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.060 [2024-11-20 10:41:37.402520] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:34.060 [2024-11-20 10:41:37.402550] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.060 [2024-11-20 10:41:37.404432] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.060 [2024-11-20 10:41:37.404515] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:18:34.060 pt1 00:18:34.061 10:41:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.061 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:34.061 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:34.061 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:34.061 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:34.061 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:34.061 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:34.061 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:34.061 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:34.061 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:18:34.061 10:41:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.061 10:41:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.061 malloc2 00:18:34.061 10:41:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.061 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:34.061 10:41:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.061 10:41:37 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.061 [2024-11-20 10:41:37.462187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:34.061 [2024-11-20 10:41:37.462241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.061 [2024-11-20 10:41:37.462276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:34.061 [2024-11-20 10:41:37.462285] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.061 [2024-11-20 10:41:37.464127] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.061 [2024-11-20 10:41:37.464164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:34.061 pt2 00:18:34.061 10:41:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.061 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:34.061 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:34.061 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:34.061 10:41:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.061 10:41:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.061 [2024-11-20 10:41:37.474192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:34.061 [2024-11-20 10:41:37.475929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:34.061 [2024-11-20 10:41:37.476102] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:34.061 [2024-11-20 10:41:37.476117] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:34.061 [2024-11-20 10:41:37.476203] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:34.061 [2024-11-20 10:41:37.476326] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:34.061 [2024-11-20 10:41:37.476337] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:34.061 [2024-11-20 10:41:37.476490] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:34.061 10:41:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.061 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:34.061 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:34.061 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:34.061 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:34.061 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:34.061 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:34.061 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.061 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.061 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.061 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.061 10:41:37 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.061 10:41:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.061 10:41:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.061 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.061 10:41:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.061 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.061 "name": "raid_bdev1", 00:18:34.061 "uuid": "625064d6-7905-49b8-b607-e00511035d0a", 00:18:34.061 "strip_size_kb": 0, 00:18:34.061 "state": "online", 00:18:34.061 "raid_level": "raid1", 00:18:34.061 "superblock": true, 00:18:34.061 "num_base_bdevs": 2, 00:18:34.061 "num_base_bdevs_discovered": 2, 00:18:34.061 "num_base_bdevs_operational": 2, 00:18:34.061 "base_bdevs_list": [ 00:18:34.061 { 00:18:34.061 "name": "pt1", 00:18:34.061 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:34.061 "is_configured": true, 00:18:34.061 "data_offset": 256, 00:18:34.061 "data_size": 7936 00:18:34.061 }, 00:18:34.061 { 00:18:34.061 "name": "pt2", 00:18:34.061 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:34.061 "is_configured": true, 00:18:34.061 "data_offset": 256, 00:18:34.061 "data_size": 7936 00:18:34.061 } 00:18:34.061 ] 00:18:34.061 }' 00:18:34.061 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.061 10:41:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.630 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:34.630 10:41:37 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:34.630 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:34.630 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:34.630 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:34.630 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:34.630 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:34.630 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:34.630 10:41:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.630 10:41:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.630 [2024-11-20 10:41:37.901708] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:34.630 10:41:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.630 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:34.630 "name": "raid_bdev1", 00:18:34.630 "aliases": [ 00:18:34.630 "625064d6-7905-49b8-b607-e00511035d0a" 00:18:34.630 ], 00:18:34.630 "product_name": "Raid Volume", 00:18:34.630 "block_size": 4096, 00:18:34.630 "num_blocks": 7936, 00:18:34.630 "uuid": "625064d6-7905-49b8-b607-e00511035d0a", 00:18:34.630 "md_size": 32, 00:18:34.630 "md_interleave": false, 00:18:34.630 "dif_type": 0, 00:18:34.630 "assigned_rate_limits": { 00:18:34.630 "rw_ios_per_sec": 0, 00:18:34.630 "rw_mbytes_per_sec": 0, 00:18:34.630 "r_mbytes_per_sec": 0, 00:18:34.630 "w_mbytes_per_sec": 0 00:18:34.630 }, 00:18:34.630 "claimed": false, 00:18:34.630 "zoned": false, 
00:18:34.630 "supported_io_types": { 00:18:34.630 "read": true, 00:18:34.630 "write": true, 00:18:34.630 "unmap": false, 00:18:34.630 "flush": false, 00:18:34.630 "reset": true, 00:18:34.630 "nvme_admin": false, 00:18:34.630 "nvme_io": false, 00:18:34.630 "nvme_io_md": false, 00:18:34.630 "write_zeroes": true, 00:18:34.630 "zcopy": false, 00:18:34.630 "get_zone_info": false, 00:18:34.630 "zone_management": false, 00:18:34.630 "zone_append": false, 00:18:34.630 "compare": false, 00:18:34.630 "compare_and_write": false, 00:18:34.630 "abort": false, 00:18:34.630 "seek_hole": false, 00:18:34.630 "seek_data": false, 00:18:34.630 "copy": false, 00:18:34.630 "nvme_iov_md": false 00:18:34.630 }, 00:18:34.630 "memory_domains": [ 00:18:34.630 { 00:18:34.630 "dma_device_id": "system", 00:18:34.630 "dma_device_type": 1 00:18:34.630 }, 00:18:34.630 { 00:18:34.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:34.630 "dma_device_type": 2 00:18:34.630 }, 00:18:34.630 { 00:18:34.630 "dma_device_id": "system", 00:18:34.630 "dma_device_type": 1 00:18:34.630 }, 00:18:34.630 { 00:18:34.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:34.630 "dma_device_type": 2 00:18:34.630 } 00:18:34.630 ], 00:18:34.630 "driver_specific": { 00:18:34.630 "raid": { 00:18:34.630 "uuid": "625064d6-7905-49b8-b607-e00511035d0a", 00:18:34.630 "strip_size_kb": 0, 00:18:34.630 "state": "online", 00:18:34.630 "raid_level": "raid1", 00:18:34.630 "superblock": true, 00:18:34.630 "num_base_bdevs": 2, 00:18:34.630 "num_base_bdevs_discovered": 2, 00:18:34.630 "num_base_bdevs_operational": 2, 00:18:34.630 "base_bdevs_list": [ 00:18:34.630 { 00:18:34.630 "name": "pt1", 00:18:34.630 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:34.630 "is_configured": true, 00:18:34.630 "data_offset": 256, 00:18:34.631 "data_size": 7936 00:18:34.631 }, 00:18:34.631 { 00:18:34.631 "name": "pt2", 00:18:34.631 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:34.631 "is_configured": true, 00:18:34.631 "data_offset": 256, 
00:18:34.631 "data_size": 7936 00:18:34.631 } 00:18:34.631 ] 00:18:34.631 } 00:18:34.631 } 00:18:34.631 }' 00:18:34.631 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:34.631 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:34.631 pt2' 00:18:34.631 10:41:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:34.631 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:34.631 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:34.631 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:34.631 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.631 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.631 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:34.631 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.631 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:34.631 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:34.631 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:34.631 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:18:34.631 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:34.631 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.631 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.631 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.891 [2024-11-20 10:41:38.133266] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=625064d6-7905-49b8-b607-e00511035d0a 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 625064d6-7905-49b8-b607-e00511035d0a ']' 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.891 [2024-11-20 10:41:38.176931] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:34.891 [2024-11-20 10:41:38.176993] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:34.891 [2024-11-20 10:41:38.177076] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:34.891 [2024-11-20 10:41:38.177129] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:34.891 [2024-11-20 10:41:38.177141] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:18:34.891 10:41:38 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.891 [2024-11-20 10:41:38.316714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:34.891 [2024-11-20 10:41:38.318490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:34.891 [2024-11-20 10:41:38.318605] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:34.891 [2024-11-20 10:41:38.318694] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:34.891 [2024-11-20 10:41:38.318743] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:34.891 [2024-11-20 10:41:38.318773] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:34.891 request: 00:18:34.891 { 00:18:34.891 "name": 
"raid_bdev1", 00:18:34.891 "raid_level": "raid1", 00:18:34.891 "base_bdevs": [ 00:18:34.891 "malloc1", 00:18:34.891 "malloc2" 00:18:34.891 ], 00:18:34.891 "superblock": false, 00:18:34.891 "method": "bdev_raid_create", 00:18:34.891 "req_id": 1 00:18:34.891 } 00:18:34.891 Got JSON-RPC error response 00:18:34.891 response: 00:18:34.891 { 00:18:34.891 "code": -17, 00:18:34.891 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:34.891 } 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:34.891 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:34.892 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.892 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.892 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.892 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:34.892 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.152 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:35.152 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:35.152 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:18:35.152 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.152 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.152 [2024-11-20 10:41:38.384571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:35.152 [2024-11-20 10:41:38.384659] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.152 [2024-11-20 10:41:38.384690] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:35.152 [2024-11-20 10:41:38.384737] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.152 [2024-11-20 10:41:38.386581] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.152 [2024-11-20 10:41:38.386650] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:35.152 [2024-11-20 10:41:38.386715] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:35.152 [2024-11-20 10:41:38.386797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:35.152 pt1 00:18:35.152 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.152 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:35.152 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:35.152 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:35.152 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:35.152 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:18:35.152 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:35.152 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.152 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.152 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.152 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.152 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.152 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.152 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.152 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.152 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.152 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.152 "name": "raid_bdev1", 00:18:35.152 "uuid": "625064d6-7905-49b8-b607-e00511035d0a", 00:18:35.152 "strip_size_kb": 0, 00:18:35.152 "state": "configuring", 00:18:35.152 "raid_level": "raid1", 00:18:35.152 "superblock": true, 00:18:35.152 "num_base_bdevs": 2, 00:18:35.152 "num_base_bdevs_discovered": 1, 00:18:35.152 "num_base_bdevs_operational": 2, 00:18:35.152 "base_bdevs_list": [ 00:18:35.152 { 00:18:35.152 "name": "pt1", 00:18:35.152 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:35.152 "is_configured": true, 00:18:35.152 "data_offset": 256, 00:18:35.152 "data_size": 7936 00:18:35.152 }, 00:18:35.152 { 00:18:35.152 "name": null, 00:18:35.152 
"uuid": "00000000-0000-0000-0000-000000000002", 00:18:35.152 "is_configured": false, 00:18:35.152 "data_offset": 256, 00:18:35.152 "data_size": 7936 00:18:35.152 } 00:18:35.152 ] 00:18:35.152 }' 00:18:35.152 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.152 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.412 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:35.412 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:35.412 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:35.412 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:35.412 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.412 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.412 [2024-11-20 10:41:38.791852] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:35.412 [2024-11-20 10:41:38.791911] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.412 [2024-11-20 10:41:38.791930] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:35.412 [2024-11-20 10:41:38.791940] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.412 [2024-11-20 10:41:38.792114] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.412 [2024-11-20 10:41:38.792130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:35.412 [2024-11-20 10:41:38.792168] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:18:35.412 [2024-11-20 10:41:38.792186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:35.412 [2024-11-20 10:41:38.792284] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:35.412 [2024-11-20 10:41:38.792305] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:35.412 [2024-11-20 10:41:38.792383] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:35.412 [2024-11-20 10:41:38.792498] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:35.412 [2024-11-20 10:41:38.792506] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:35.412 [2024-11-20 10:41:38.792586] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:35.412 pt2 00:18:35.412 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.412 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:35.412 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:35.412 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:35.412 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:35.412 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:35.412 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:35.412 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:35.413 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:18:35.413 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.413 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.413 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.413 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.413 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.413 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.413 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.413 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.413 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.413 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.413 "name": "raid_bdev1", 00:18:35.413 "uuid": "625064d6-7905-49b8-b607-e00511035d0a", 00:18:35.413 "strip_size_kb": 0, 00:18:35.413 "state": "online", 00:18:35.413 "raid_level": "raid1", 00:18:35.413 "superblock": true, 00:18:35.413 "num_base_bdevs": 2, 00:18:35.413 "num_base_bdevs_discovered": 2, 00:18:35.413 "num_base_bdevs_operational": 2, 00:18:35.413 "base_bdevs_list": [ 00:18:35.413 { 00:18:35.413 "name": "pt1", 00:18:35.413 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:35.413 "is_configured": true, 00:18:35.413 "data_offset": 256, 00:18:35.413 "data_size": 7936 00:18:35.413 }, 00:18:35.413 { 00:18:35.413 "name": "pt2", 00:18:35.413 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:35.413 "is_configured": true, 00:18:35.413 "data_offset": 256, 
00:18:35.413 "data_size": 7936 00:18:35.413 } 00:18:35.413 ] 00:18:35.413 }' 00:18:35.413 10:41:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.413 10:41:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.981 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:35.981 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:35.981 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:35.981 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:35.981 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:35.981 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:35.981 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:35.981 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:35.981 10:41:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.981 10:41:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.981 [2024-11-20 10:41:39.215425] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:35.981 10:41:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.981 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:35.981 "name": "raid_bdev1", 00:18:35.981 "aliases": [ 00:18:35.981 "625064d6-7905-49b8-b607-e00511035d0a" 00:18:35.981 ], 00:18:35.981 "product_name": 
"Raid Volume", 00:18:35.981 "block_size": 4096, 00:18:35.981 "num_blocks": 7936, 00:18:35.981 "uuid": "625064d6-7905-49b8-b607-e00511035d0a", 00:18:35.981 "md_size": 32, 00:18:35.981 "md_interleave": false, 00:18:35.981 "dif_type": 0, 00:18:35.981 "assigned_rate_limits": { 00:18:35.981 "rw_ios_per_sec": 0, 00:18:35.981 "rw_mbytes_per_sec": 0, 00:18:35.981 "r_mbytes_per_sec": 0, 00:18:35.981 "w_mbytes_per_sec": 0 00:18:35.982 }, 00:18:35.982 "claimed": false, 00:18:35.982 "zoned": false, 00:18:35.982 "supported_io_types": { 00:18:35.982 "read": true, 00:18:35.982 "write": true, 00:18:35.982 "unmap": false, 00:18:35.982 "flush": false, 00:18:35.982 "reset": true, 00:18:35.982 "nvme_admin": false, 00:18:35.982 "nvme_io": false, 00:18:35.982 "nvme_io_md": false, 00:18:35.982 "write_zeroes": true, 00:18:35.982 "zcopy": false, 00:18:35.982 "get_zone_info": false, 00:18:35.982 "zone_management": false, 00:18:35.982 "zone_append": false, 00:18:35.982 "compare": false, 00:18:35.982 "compare_and_write": false, 00:18:35.982 "abort": false, 00:18:35.982 "seek_hole": false, 00:18:35.982 "seek_data": false, 00:18:35.982 "copy": false, 00:18:35.982 "nvme_iov_md": false 00:18:35.982 }, 00:18:35.982 "memory_domains": [ 00:18:35.982 { 00:18:35.982 "dma_device_id": "system", 00:18:35.982 "dma_device_type": 1 00:18:35.982 }, 00:18:35.982 { 00:18:35.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.982 "dma_device_type": 2 00:18:35.982 }, 00:18:35.982 { 00:18:35.982 "dma_device_id": "system", 00:18:35.982 "dma_device_type": 1 00:18:35.982 }, 00:18:35.982 { 00:18:35.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.982 "dma_device_type": 2 00:18:35.982 } 00:18:35.982 ], 00:18:35.982 "driver_specific": { 00:18:35.982 "raid": { 00:18:35.982 "uuid": "625064d6-7905-49b8-b607-e00511035d0a", 00:18:35.982 "strip_size_kb": 0, 00:18:35.982 "state": "online", 00:18:35.982 "raid_level": "raid1", 00:18:35.982 "superblock": true, 00:18:35.982 "num_base_bdevs": 2, 00:18:35.982 
"num_base_bdevs_discovered": 2, 00:18:35.982 "num_base_bdevs_operational": 2, 00:18:35.982 "base_bdevs_list": [ 00:18:35.982 { 00:18:35.982 "name": "pt1", 00:18:35.982 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:35.982 "is_configured": true, 00:18:35.982 "data_offset": 256, 00:18:35.982 "data_size": 7936 00:18:35.982 }, 00:18:35.982 { 00:18:35.982 "name": "pt2", 00:18:35.982 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:35.982 "is_configured": true, 00:18:35.982 "data_offset": 256, 00:18:35.982 "data_size": 7936 00:18:35.982 } 00:18:35.982 ] 00:18:35.982 } 00:18:35.982 } 00:18:35.982 }' 00:18:35.982 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:35.982 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:35.982 pt2' 00:18:35.982 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:35.982 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:35.982 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:35.982 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:35.982 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:35.982 10:41:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.982 10:41:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.982 10:41:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.982 
10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:35.982 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:35.982 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:35.982 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:35.982 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:35.982 10:41:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.982 10:41:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.982 10:41:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.982 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:35.982 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:35.982 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:35.982 10:41:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.982 10:41:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.982 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:35.982 [2024-11-20 10:41:39.446984] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:35.982 10:41:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:18:36.242 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 625064d6-7905-49b8-b607-e00511035d0a '!=' 625064d6-7905-49b8-b607-e00511035d0a ']' 00:18:36.242 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:36.242 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:36.242 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:36.242 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:36.242 10:41:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.242 10:41:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.242 [2024-11-20 10:41:39.494692] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:36.242 10:41:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.242 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:36.242 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:36.242 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:36.242 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:36.242 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:36.242 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:36.242 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.242 10:41:39 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.242 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.242 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.242 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.242 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.242 10:41:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.242 10:41:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.242 10:41:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.242 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.242 "name": "raid_bdev1", 00:18:36.242 "uuid": "625064d6-7905-49b8-b607-e00511035d0a", 00:18:36.242 "strip_size_kb": 0, 00:18:36.242 "state": "online", 00:18:36.242 "raid_level": "raid1", 00:18:36.242 "superblock": true, 00:18:36.242 "num_base_bdevs": 2, 00:18:36.242 "num_base_bdevs_discovered": 1, 00:18:36.242 "num_base_bdevs_operational": 1, 00:18:36.242 "base_bdevs_list": [ 00:18:36.242 { 00:18:36.242 "name": null, 00:18:36.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.242 "is_configured": false, 00:18:36.242 "data_offset": 0, 00:18:36.242 "data_size": 7936 00:18:36.242 }, 00:18:36.242 { 00:18:36.242 "name": "pt2", 00:18:36.242 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:36.242 "is_configured": true, 00:18:36.242 "data_offset": 256, 00:18:36.242 "data_size": 7936 00:18:36.242 } 00:18:36.242 ] 00:18:36.242 }' 00:18:36.242 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:18:36.242 10:41:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.503 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:36.503 10:41:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.503 10:41:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.503 [2024-11-20 10:41:39.877995] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:36.503 [2024-11-20 10:41:39.878018] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:36.503 [2024-11-20 10:41:39.878077] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:36.503 [2024-11-20 10:41:39.878120] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:36.503 [2024-11-20 10:41:39.878130] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:36.503 10:41:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.503 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.503 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:36.503 10:41:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.503 10:41:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.503 10:41:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.503 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:36.503 10:41:39 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:36.503 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:36.503 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:36.503 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:36.503 10:41:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.503 10:41:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.503 10:41:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.503 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:36.503 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:36.503 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:36.503 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:36.503 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:18:36.503 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:36.503 10:41:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.503 10:41:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.503 [2024-11-20 10:41:39.953861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:36.503 [2024-11-20 10:41:39.953968] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.503 
[2024-11-20 10:41:39.954003] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:36.503 [2024-11-20 10:41:39.954049] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.503 [2024-11-20 10:41:39.955979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.503 [2024-11-20 10:41:39.956055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:36.503 [2024-11-20 10:41:39.956123] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:36.503 [2024-11-20 10:41:39.956175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:36.503 [2024-11-20 10:41:39.956267] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:36.503 [2024-11-20 10:41:39.956279] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:36.503 [2024-11-20 10:41:39.956345] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:36.503 [2024-11-20 10:41:39.956473] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:36.503 [2024-11-20 10:41:39.956481] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:36.504 [2024-11-20 10:41:39.956575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:36.504 pt2 00:18:36.504 10:41:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.504 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:36.504 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:36.504 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:18:36.504 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:36.504 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:36.504 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:36.504 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.504 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.504 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.504 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.504 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.504 10:41:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.504 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.504 10:41:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.504 10:41:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.764 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.764 "name": "raid_bdev1", 00:18:36.764 "uuid": "625064d6-7905-49b8-b607-e00511035d0a", 00:18:36.764 "strip_size_kb": 0, 00:18:36.764 "state": "online", 00:18:36.764 "raid_level": "raid1", 00:18:36.764 "superblock": true, 00:18:36.764 "num_base_bdevs": 2, 00:18:36.764 "num_base_bdevs_discovered": 1, 00:18:36.764 "num_base_bdevs_operational": 1, 00:18:36.764 "base_bdevs_list": [ 00:18:36.764 { 00:18:36.764 
"name": null, 00:18:36.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.764 "is_configured": false, 00:18:36.764 "data_offset": 256, 00:18:36.764 "data_size": 7936 00:18:36.764 }, 00:18:36.764 { 00:18:36.764 "name": "pt2", 00:18:36.764 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:36.764 "is_configured": true, 00:18:36.764 "data_offset": 256, 00:18:36.764 "data_size": 7936 00:18:36.764 } 00:18:36.764 ] 00:18:36.764 }' 00:18:36.764 10:41:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.764 10:41:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.025 10:41:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:37.025 10:41:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.025 10:41:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.025 [2024-11-20 10:41:40.313265] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:37.025 [2024-11-20 10:41:40.313334] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:37.025 [2024-11-20 10:41:40.313438] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:37.025 [2024-11-20 10:41:40.313510] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:37.025 [2024-11-20 10:41:40.313552] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:37.025 10:41:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.025 10:41:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.025 10:41:40 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.025 10:41:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:37.025 10:41:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.025 10:41:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.025 10:41:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:37.025 10:41:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:37.025 10:41:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:37.025 10:41:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:37.025 10:41:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.025 10:41:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.025 [2024-11-20 10:41:40.377181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:37.025 [2024-11-20 10:41:40.377275] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:37.025 [2024-11-20 10:41:40.377315] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:37.025 [2024-11-20 10:41:40.377346] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:37.025 [2024-11-20 10:41:40.379239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:37.025 [2024-11-20 10:41:40.379307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:37.025 [2024-11-20 10:41:40.379396] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:18:37.025 [2024-11-20 10:41:40.379468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:37.025 [2024-11-20 10:41:40.379627] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:37.025 [2024-11-20 10:41:40.379677] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:37.025 [2024-11-20 10:41:40.379713] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:37.025 [2024-11-20 10:41:40.379820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:37.025 [2024-11-20 10:41:40.379917] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:37.025 [2024-11-20 10:41:40.379953] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:37.025 [2024-11-20 10:41:40.380045] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:37.025 [2024-11-20 10:41:40.380176] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:37.025 [2024-11-20 10:41:40.380214] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:37.025 [2024-11-20 10:41:40.380364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.025 pt1 00:18:37.025 10:41:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.025 10:41:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:37.025 10:41:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:37.025 10:41:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:18:37.025 10:41:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:37.025 10:41:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:37.025 10:41:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:37.025 10:41:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:37.025 10:41:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.025 10:41:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.025 10:41:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.025 10:41:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.025 10:41:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.025 10:41:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.025 10:41:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.025 10:41:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.025 10:41:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.026 10:41:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.026 "name": "raid_bdev1", 00:18:37.026 "uuid": "625064d6-7905-49b8-b607-e00511035d0a", 00:18:37.026 "strip_size_kb": 0, 00:18:37.026 "state": "online", 00:18:37.026 "raid_level": "raid1", 00:18:37.026 "superblock": true, 00:18:37.026 "num_base_bdevs": 2, 00:18:37.026 "num_base_bdevs_discovered": 1, 00:18:37.026 
"num_base_bdevs_operational": 1, 00:18:37.026 "base_bdevs_list": [ 00:18:37.026 { 00:18:37.026 "name": null, 00:18:37.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.026 "is_configured": false, 00:18:37.026 "data_offset": 256, 00:18:37.026 "data_size": 7936 00:18:37.026 }, 00:18:37.026 { 00:18:37.026 "name": "pt2", 00:18:37.026 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:37.026 "is_configured": true, 00:18:37.026 "data_offset": 256, 00:18:37.026 "data_size": 7936 00:18:37.026 } 00:18:37.026 ] 00:18:37.026 }' 00:18:37.026 10:41:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.026 10:41:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.637 10:41:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:37.637 10:41:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:37.637 10:41:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.637 10:41:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.637 10:41:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.637 10:41:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:37.637 10:41:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:37.637 10:41:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:37.637 10:41:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.637 10:41:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.637 [2024-11-20 
10:41:40.840596] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:37.637 10:41:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.637 10:41:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 625064d6-7905-49b8-b607-e00511035d0a '!=' 625064d6-7905-49b8-b607-e00511035d0a ']' 00:18:37.637 10:41:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87591 00:18:37.637 10:41:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87591 ']' 00:18:37.637 10:41:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87591 00:18:37.637 10:41:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:37.637 10:41:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:37.637 10:41:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87591 00:18:37.637 10:41:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:37.637 10:41:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:37.637 killing process with pid 87591 00:18:37.637 10:41:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87591' 00:18:37.637 10:41:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87591 00:18:37.637 [2024-11-20 10:41:40.907602] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:37.637 [2024-11-20 10:41:40.907677] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:37.637 [2024-11-20 10:41:40.907719] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:18:37.637 [2024-11-20 10:41:40.907733] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:37.637 10:41:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87591 00:18:37.897 [2024-11-20 10:41:41.115570] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:38.837 10:41:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:18:38.837 00:18:38.837 real 0m5.655s 00:18:38.837 user 0m8.561s 00:18:38.837 sys 0m0.971s 00:18:38.837 10:41:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:38.837 ************************************ 00:18:38.837 END TEST raid_superblock_test_md_separate 00:18:38.837 ************************************ 00:18:38.837 10:41:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.837 10:41:42 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:18:38.837 10:41:42 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:18:38.837 10:41:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:38.837 10:41:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:38.837 10:41:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:38.837 ************************************ 00:18:38.837 START TEST raid_rebuild_test_sb_md_separate 00:18:38.837 ************************************ 00:18:38.837 10:41:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:18:38.837 10:41:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:38.837 10:41:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:18:38.837 10:41:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:38.837 10:41:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:38.837 10:41:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:38.837 10:41:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:38.837 10:41:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:38.837 10:41:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:38.837 10:41:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:38.837 10:41:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:38.837 10:41:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:38.837 10:41:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:38.837 10:41:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:38.837 10:41:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:38.837 10:41:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:38.837 10:41:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:38.837 10:41:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:38.837 10:41:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:38.837 10:41:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:38.837 
10:41:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:38.837 10:41:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:38.837 10:41:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:38.837 10:41:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:38.837 10:41:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:38.837 10:41:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87914 00:18:38.837 10:41:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:38.837 10:41:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87914 00:18:38.837 10:41:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87914 ']' 00:18:38.837 10:41:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.837 10:41:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:38.837 10:41:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:38.837 10:41:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:38.837 10:41:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.837 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:18:38.837 Zero copy mechanism will not be used. 00:18:38.837 [2024-11-20 10:41:42.283465] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:18:38.837 [2024-11-20 10:41:42.283678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87914 ] 00:18:39.097 [2024-11-20 10:41:42.444992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.097 [2024-11-20 10:41:42.557317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.356 [2024-11-20 10:41:42.749124] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:39.356 [2024-11-20 10:41:42.749209] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:39.615 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:39.615 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:39.875 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:39.875 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:18:39.875 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.875 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.875 BaseBdev1_malloc 00:18:39.875 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.875 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:39.875 10:41:43 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.875 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.875 [2024-11-20 10:41:43.140834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:39.875 [2024-11-20 10:41:43.140946] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.875 [2024-11-20 10:41:43.140971] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:39.875 [2024-11-20 10:41:43.140983] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.875 [2024-11-20 10:41:43.142920] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.875 [2024-11-20 10:41:43.142959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:39.875 BaseBdev1 00:18:39.875 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.875 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:39.875 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:18:39.875 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.875 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.875 BaseBdev2_malloc 00:18:39.875 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.875 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:39.875 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:39.875 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.875 [2024-11-20 10:41:43.194621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:39.875 [2024-11-20 10:41:43.194694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.875 [2024-11-20 10:41:43.194712] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:39.875 [2024-11-20 10:41:43.194722] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.875 [2024-11-20 10:41:43.196541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.875 [2024-11-20 10:41:43.196616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:39.875 BaseBdev2 00:18:39.875 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.875 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:18:39.875 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.875 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.875 spare_malloc 00:18:39.875 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.875 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:39.875 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.875 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.875 spare_delay 00:18:39.875 10:41:43 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.875 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:39.875 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.875 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.875 [2024-11-20 10:41:43.292123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:39.875 [2024-11-20 10:41:43.292195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.875 [2024-11-20 10:41:43.292214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:39.875 [2024-11-20 10:41:43.292225] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.875 [2024-11-20 10:41:43.294030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.875 [2024-11-20 10:41:43.294071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:39.875 spare 00:18:39.875 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.875 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:39.875 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.875 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.875 [2024-11-20 10:41:43.304150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:39.875 [2024-11-20 10:41:43.305849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:18:39.876 [2024-11-20 10:41:43.306031] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:39.876 [2024-11-20 10:41:43.306046] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:39.876 [2024-11-20 10:41:43.306113] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:39.876 [2024-11-20 10:41:43.306236] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:39.876 [2024-11-20 10:41:43.306244] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:39.876 [2024-11-20 10:41:43.306349] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:39.876 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.876 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:39.876 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:39.876 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:39.876 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:39.876 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:39.876 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:39.876 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.876 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.876 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:39.876 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.876 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.876 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.876 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.876 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.876 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.135 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.135 "name": "raid_bdev1", 00:18:40.135 "uuid": "94afbbe8-a41e-41c2-a225-2075b20b4c28", 00:18:40.135 "strip_size_kb": 0, 00:18:40.135 "state": "online", 00:18:40.135 "raid_level": "raid1", 00:18:40.135 "superblock": true, 00:18:40.135 "num_base_bdevs": 2, 00:18:40.135 "num_base_bdevs_discovered": 2, 00:18:40.135 "num_base_bdevs_operational": 2, 00:18:40.135 "base_bdevs_list": [ 00:18:40.135 { 00:18:40.135 "name": "BaseBdev1", 00:18:40.135 "uuid": "613f397b-634f-5a45-a05a-875c0c948d67", 00:18:40.135 "is_configured": true, 00:18:40.135 "data_offset": 256, 00:18:40.135 "data_size": 7936 00:18:40.135 }, 00:18:40.135 { 00:18:40.135 "name": "BaseBdev2", 00:18:40.135 "uuid": "49e8fc1c-c01c-547e-9a22-7ae123a46fa5", 00:18:40.135 "is_configured": true, 00:18:40.135 "data_offset": 256, 00:18:40.135 "data_size": 7936 00:18:40.135 } 00:18:40.135 ] 00:18:40.135 }' 00:18:40.135 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.135 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.395 10:41:43 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:18:40.395 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:18:40.395 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:40.395 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:40.395 [2024-11-20 10:41:43.763656] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:40.395 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:40.395 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936
00:18:40.395 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:40.395 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:40.395 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:18:40.395 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:40.395 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:40.395 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256
00:18:40.395 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:18:40.395 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:18:40.395 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:18:40.395 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:18:40.395 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:18:40.395 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:18:40.395 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list
00:18:40.395 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:18:40.395 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list
00:18:40.395 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i
00:18:40.395 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:18:40.395 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:18:40.395 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:18:40.655 [2024-11-20 10:41:44.015010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:18:40.655 /dev/nbd0
00:18:40.655 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:18:40.655 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:18:40.655 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:18:40.655 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i
00:18:40.655 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:18:40.655 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:18:40.655 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:18:40.655 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break
00:18:40.655 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:18:40.656 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:18:40.656 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:18:40.656 1+0 records in
00:18:40.656 1+0 records out
00:18:40.656 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325435 s, 12.6 MB/s
00:18:40.656 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:40.656 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096
00:18:40.656 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:40.656 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:18:40.656 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0
00:18:40.656 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:18:40.656 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:18:40.656 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']'
00:18:40.656 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1
00:18:40.656 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct
00:18:41.225 7936+0 records in
00:18:41.225 7936+0 records out
00:18:41.225 32505856 bytes (33 MB, 31 MiB) copied, 0.57648 s, 56.4 MB/s
00:18:41.225 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:18:41.225 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:18:41.225 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:18:41.225 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list
00:18:41.225 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i
00:18:41.225 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:18:41.225 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:18:41.485 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:18:41.485 [2024-11-20 10:41:44.865424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:41.485 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:18:41.485 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:18:41.485 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:18:41.485 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:18:41.485 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:18:41.485 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break
00:18:41.485 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0
00:18:41.485 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:18:41.485 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:41.485 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:41.485 [2024-11-20 10:41:44.881508] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:18:41.485 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:41.485 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:18:41.485 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:41.485 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:41.485 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:41.485 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:41.485 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:18:41.486 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:41.486 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:41.486 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:41.486 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:41.486 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:41.486 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:41.486 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:41.486 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:41.486 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:41.486 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:41.486 "name": "raid_bdev1",
00:18:41.486 "uuid": "94afbbe8-a41e-41c2-a225-2075b20b4c28",
00:18:41.486 "strip_size_kb": 0,
00:18:41.486 "state": "online",
00:18:41.486 "raid_level": "raid1",
00:18:41.486 "superblock": true,
00:18:41.486 "num_base_bdevs": 2,
00:18:41.486 "num_base_bdevs_discovered": 1,
00:18:41.486 "num_base_bdevs_operational": 1,
00:18:41.486 "base_bdevs_list": [
00:18:41.486 {
00:18:41.486 "name": null,
00:18:41.486 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:41.486 "is_configured": false,
00:18:41.486 "data_offset": 0,
00:18:41.486 "data_size": 7936
00:18:41.486 },
00:18:41.486 {
00:18:41.486 "name": "BaseBdev2",
00:18:41.486 "uuid": "49e8fc1c-c01c-547e-9a22-7ae123a46fa5",
00:18:41.486 "is_configured": true,
00:18:41.486 "data_offset": 256,
00:18:41.486 "data_size": 7936
00:18:41.486 }
00:18:41.486 ]
00:18:41.486 }'
00:18:41.486 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:41.486 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:42.056 10:41:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:18:42.056 10:41:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:42.056 10:41:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:42.056 [2024-11-20 10:41:45.360674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:18:42.056 [2024-11-20 10:41:45.374159] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260
00:18:42.056 10:41:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:42.056 10:41:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1
00:18:42.056 [2024-11-20 10:41:45.375935] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:18:42.993 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:42.993 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:42.993 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:42.993 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:42.993 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:42.993 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:42.993 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:42.994 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:42.994 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:42.994 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:42.994 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:42.994 "name": "raid_bdev1",
00:18:42.994 "uuid": "94afbbe8-a41e-41c2-a225-2075b20b4c28",
00:18:42.994 "strip_size_kb": 0,
00:18:42.994 "state": "online",
00:18:42.994 "raid_level": "raid1",
00:18:42.994 "superblock": true,
00:18:42.994 "num_base_bdevs": 2,
00:18:42.994 "num_base_bdevs_discovered": 2,
00:18:42.994 "num_base_bdevs_operational": 2,
00:18:42.994 "process": {
00:18:42.994 "type": "rebuild",
00:18:42.994 "target": "spare",
00:18:42.994 "progress": {
00:18:42.994 "blocks": 2560,
00:18:42.994 "percent": 32
00:18:42.994 }
00:18:42.994 },
00:18:42.994 "base_bdevs_list": [
00:18:42.994 {
00:18:42.994 "name": "spare",
00:18:42.994 "uuid": "91f5a8a2-33b0-5272-93a1-08ff7c23a085",
00:18:42.994 "is_configured": true,
00:18:42.994 "data_offset": 256,
00:18:42.994 "data_size": 7936
00:18:42.994 },
00:18:42.994 {
00:18:42.994 "name": "BaseBdev2",
00:18:42.994 "uuid": "49e8fc1c-c01c-547e-9a22-7ae123a46fa5",
00:18:42.994 "is_configured": true,
00:18:42.994 "data_offset": 256,
00:18:42.994 "data_size": 7936
00:18:42.994 }
00:18:42.994 ]
00:18:42.994 }'
00:18:43.252 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:43.252 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:18:43.252 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:43.252 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:18:43.252 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:18:43.252 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:43.252 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:43.252 [2024-11-20 10:41:46.536180] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:18:43.252 [2024-11-20 10:41:46.580629] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:18:43.252 [2024-11-20 10:41:46.580748] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:43.252 [2024-11-20 10:41:46.580765] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:18:43.252 [2024-11-20 10:41:46.580774] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:18:43.252 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:43.252 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:18:43.252 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:43.252 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:43.252 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:43.252 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:43.252 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:18:43.252 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:43.252 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:43.252 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:43.252 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:43.252 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:43.252 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:43.252 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:43.252 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:43.252 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:43.252 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:43.252 "name": "raid_bdev1",
00:18:43.252 "uuid": "94afbbe8-a41e-41c2-a225-2075b20b4c28",
00:18:43.252 "strip_size_kb": 0,
00:18:43.252 "state": "online",
00:18:43.252 "raid_level": "raid1",
00:18:43.252 "superblock": true,
00:18:43.252 "num_base_bdevs": 2,
00:18:43.252 "num_base_bdevs_discovered": 1,
00:18:43.252 "num_base_bdevs_operational": 1,
00:18:43.252 "base_bdevs_list": [
00:18:43.252 {
00:18:43.252 "name": null,
00:18:43.252 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:43.252 "is_configured": false,
00:18:43.252 "data_offset": 0,
00:18:43.252 "data_size": 7936
00:18:43.252 },
00:18:43.252 {
00:18:43.252 "name": "BaseBdev2",
00:18:43.252 "uuid": "49e8fc1c-c01c-547e-9a22-7ae123a46fa5",
00:18:43.252 "is_configured": true,
00:18:43.252 "data_offset": 256,
00:18:43.252 "data_size": 7936
00:18:43.252 }
00:18:43.252 ]
00:18:43.252 }'
00:18:43.252 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:43.252 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:43.821 10:41:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:18:43.821 10:41:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:43.821 10:41:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:18:43.821 10:41:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:18:43.821 10:41:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:43.821 10:41:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:43.821 10:41:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:43.821 10:41:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:43.821 10:41:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:43.821 10:41:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:43.821 10:41:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:43.821 "name": "raid_bdev1",
00:18:43.821 "uuid": "94afbbe8-a41e-41c2-a225-2075b20b4c28",
00:18:43.821 "strip_size_kb": 0,
00:18:43.821 "state": "online",
00:18:43.821 "raid_level": "raid1",
00:18:43.821 "superblock": true,
00:18:43.821 "num_base_bdevs": 2,
00:18:43.821 "num_base_bdevs_discovered": 1,
00:18:43.821 "num_base_bdevs_operational": 1,
00:18:43.821 "base_bdevs_list": [
00:18:43.821 {
00:18:43.821 "name": null,
00:18:43.821 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:43.821 "is_configured": false,
00:18:43.821 "data_offset": 0,
00:18:43.821 "data_size": 7936
00:18:43.821 },
00:18:43.821 {
00:18:43.821 "name": "BaseBdev2",
00:18:43.821 "uuid": "49e8fc1c-c01c-547e-9a22-7ae123a46fa5",
00:18:43.821 "is_configured": true,
00:18:43.821 "data_offset": 256,
00:18:43.821 "data_size": 7936
00:18:43.821 }
00:18:43.821 ]
00:18:43.821 }'
00:18:43.821 10:41:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:43.821 10:41:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:18:43.822 10:41:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:43.822 10:41:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:18:43.822 10:41:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:18:43.822 10:41:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:43.822 10:41:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:43.822 [2024-11-20 10:41:47.179087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:18:43.822 [2024-11-20 10:41:47.192905] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330
00:18:43.822 10:41:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:43.822 10:41:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1
00:18:43.822 [2024-11-20 10:41:47.194675] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:18:44.761 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:44.761 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:44.761 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:44.761 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:44.761 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:44.761 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:44.761 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:44.761 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:44.761 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:44.761 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:45.021 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:45.021 "name": "raid_bdev1",
00:18:45.021 "uuid": "94afbbe8-a41e-41c2-a225-2075b20b4c28",
00:18:45.021 "strip_size_kb": 0,
00:18:45.021 "state": "online",
00:18:45.021 "raid_level": "raid1",
00:18:45.021 "superblock": true,
00:18:45.021 "num_base_bdevs": 2,
00:18:45.021 "num_base_bdevs_discovered": 2,
00:18:45.021 "num_base_bdevs_operational": 2,
00:18:45.021 "process": {
00:18:45.021 "type": "rebuild",
00:18:45.021 "target": "spare",
00:18:45.021 "progress": {
00:18:45.021 "blocks": 2560,
00:18:45.021 "percent": 32
00:18:45.021 }
00:18:45.021 },
00:18:45.021 "base_bdevs_list": [
00:18:45.021 {
00:18:45.021 "name": "spare",
00:18:45.021 "uuid": "91f5a8a2-33b0-5272-93a1-08ff7c23a085",
00:18:45.021 "is_configured": true,
00:18:45.021 "data_offset": 256,
00:18:45.021 "data_size": 7936
00:18:45.021 },
00:18:45.021 {
00:18:45.021 "name": "BaseBdev2",
00:18:45.021 "uuid": "49e8fc1c-c01c-547e-9a22-7ae123a46fa5",
00:18:45.021 "is_configured": true,
00:18:45.021 "data_offset": 256,
00:18:45.021 "data_size": 7936
00:18:45.021 }
00:18:45.021 ]
00:18:45.021 }'
00:18:45.021 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:45.021 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:18:45.021 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:45.021 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:18:45.021 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:18:45.021 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
00:18:45.021 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
00:18:45.021 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2
00:18:45.021 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:18:45.021 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']'
00:18:45.021 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=713
00:18:45.021 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:18:45.021 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:45.021 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:45.021 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:45.021 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:45.021 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:45.021 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:45.021 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:45.021 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:45.021 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:45.021 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:45.021 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:45.021 "name": "raid_bdev1",
00:18:45.021 "uuid": "94afbbe8-a41e-41c2-a225-2075b20b4c28",
00:18:45.021 "strip_size_kb": 0,
00:18:45.021 "state": "online",
00:18:45.021 "raid_level": "raid1",
00:18:45.021 "superblock": true,
00:18:45.021 "num_base_bdevs": 2,
00:18:45.021 "num_base_bdevs_discovered": 2,
00:18:45.021 "num_base_bdevs_operational": 2,
00:18:45.021 "process": {
00:18:45.021 "type": "rebuild",
00:18:45.021 "target": "spare",
00:18:45.021 "progress": {
00:18:45.021 "blocks": 2816,
00:18:45.021 "percent": 35
00:18:45.021 }
00:18:45.021 },
00:18:45.021 "base_bdevs_list": [
00:18:45.021 {
00:18:45.021 "name": "spare",
00:18:45.021 "uuid": "91f5a8a2-33b0-5272-93a1-08ff7c23a085",
00:18:45.021 "is_configured": true,
00:18:45.021 "data_offset": 256,
00:18:45.021 "data_size": 7936
00:18:45.021 },
00:18:45.021 {
00:18:45.021 "name": "BaseBdev2",
00:18:45.021 "uuid": "49e8fc1c-c01c-547e-9a22-7ae123a46fa5",
00:18:45.021 "is_configured": true,
00:18:45.021 "data_offset": 256,
00:18:45.021 "data_size": 7936
00:18:45.021 }
00:18:45.021 ]
00:18:45.021 }'
00:18:45.021 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:45.021 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:18:45.022 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:45.022 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:18:45.022 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1
00:18:46.403 10:41:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:18:46.403 10:41:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:46.403 10:41:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:46.403 10:41:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:46.403 10:41:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:46.403 10:41:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:46.403 10:41:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:46.403 10:41:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:46.403 10:41:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:46.403 10:41:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:46.403 10:41:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:46.403 10:41:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:46.403 "name": "raid_bdev1",
00:18:46.403 "uuid": "94afbbe8-a41e-41c2-a225-2075b20b4c28",
00:18:46.403 "strip_size_kb": 0,
00:18:46.403 "state": "online",
00:18:46.403 "raid_level": "raid1",
00:18:46.403 "superblock": true,
00:18:46.403 "num_base_bdevs": 2,
00:18:46.403 "num_base_bdevs_discovered": 2,
00:18:46.403 "num_base_bdevs_operational": 2,
00:18:46.403 "process": {
00:18:46.403 "type": "rebuild",
00:18:46.403 "target": "spare",
00:18:46.403 "progress": {
00:18:46.403 "blocks": 5632,
00:18:46.403 "percent": 70
00:18:46.403 }
00:18:46.403 },
00:18:46.403 "base_bdevs_list": [
00:18:46.403 {
00:18:46.403 "name": "spare",
00:18:46.403 "uuid": "91f5a8a2-33b0-5272-93a1-08ff7c23a085",
00:18:46.403 "is_configured": true,
00:18:46.403 "data_offset": 256,
00:18:46.403 "data_size": 7936
00:18:46.403 },
00:18:46.403 {
00:18:46.403 "name": "BaseBdev2",
00:18:46.403 "uuid": "49e8fc1c-c01c-547e-9a22-7ae123a46fa5",
00:18:46.403 "is_configured": true,
00:18:46.403 "data_offset": 256,
00:18:46.403 "data_size": 7936
00:18:46.403 }
00:18:46.403 ]
00:18:46.403 }'
00:18:46.403 10:41:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:46.403 10:41:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:18:46.403 10:41:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:46.403 10:41:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:18:46.403 10:41:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1
00:18:46.973 [2024-11-20 10:41:50.306499] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:18:46.973 [2024-11-20 10:41:50.306615] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:18:46.973 [2024-11-20 10:41:50.306726] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:47.234 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:18:47.234 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:47.234 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:47.234 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:47.234 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:47.234 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:47.234 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:47.234 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:47.234 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:47.234 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:47.234 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:47.234 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:47.234 "name": "raid_bdev1",
00:18:47.234 "uuid": "94afbbe8-a41e-41c2-a225-2075b20b4c28",
00:18:47.234 "strip_size_kb": 0,
00:18:47.234 "state": "online",
00:18:47.234 "raid_level": "raid1",
00:18:47.234 "superblock": true,
00:18:47.234 "num_base_bdevs": 2,
00:18:47.234 "num_base_bdevs_discovered": 2,
00:18:47.234 "num_base_bdevs_operational": 2,
00:18:47.234 "base_bdevs_list": [
00:18:47.234 {
00:18:47.234 "name": "spare",
00:18:47.234 "uuid": "91f5a8a2-33b0-5272-93a1-08ff7c23a085",
00:18:47.234 "is_configured": true,
00:18:47.234 "data_offset": 256,
00:18:47.235 "data_size": 7936
00:18:47.235 },
00:18:47.235 {
00:18:47.235 "name": "BaseBdev2",
00:18:47.235 "uuid": "49e8fc1c-c01c-547e-9a22-7ae123a46fa5",
00:18:47.235 "is_configured": true,
00:18:47.235 "data_offset": 256,
00:18:47.235 "data_size": 7936
00:18:47.235 }
00:18:47.235 ]
00:18:47.235 }'
00:18:47.494 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:47.494 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:18:47.494 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:47.494 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:18:47.494 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break
00:18:47.494 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:18:47.494 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:47.494 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:18:47.494 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:18:47.494 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:47.494 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:47.494 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:47.494 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:47.494 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:47.494 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:47.494 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:47.494 "name": "raid_bdev1",
00:18:47.494 "uuid": "94afbbe8-a41e-41c2-a225-2075b20b4c28",
00:18:47.494 "strip_size_kb": 0,
00:18:47.494 "state": "online",
00:18:47.494 "raid_level": "raid1",
00:18:47.494 "superblock": true,
00:18:47.494 "num_base_bdevs": 2,
00:18:47.494 "num_base_bdevs_discovered": 2,
00:18:47.494 "num_base_bdevs_operational": 2,
00:18:47.494 "base_bdevs_list": [
00:18:47.494 {
00:18:47.494 "name": "spare",
00:18:47.494 "uuid": "91f5a8a2-33b0-5272-93a1-08ff7c23a085",
00:18:47.494 "is_configured": true,
00:18:47.494 "data_offset": 256,
00:18:47.494 "data_size": 7936
00:18:47.494 },
00:18:47.494 {
00:18:47.494 "name": "BaseBdev2",
00:18:47.494 "uuid": "49e8fc1c-c01c-547e-9a22-7ae123a46fa5",
00:18:47.494 "is_configured": true,
00:18:47.494 "data_offset": 256,
00:18:47.494 "data_size": 7936
00:18:47.494 }
00:18:47.494 ]
00:18:47.494 }'
00:18:47.494 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:47.494 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:18:47.494 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:47.494 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:18:47.494 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:18:47.494 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:47.494 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:47.494 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:47.494 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:47.494 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:18:47.494 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:47.494 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:47.494 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:47.494 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:47.494 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:47.494 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:47.494 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:47.494 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:47.494 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:47.494 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:47.495 "name": "raid_bdev1",
00:18:47.495 "uuid": "94afbbe8-a41e-41c2-a225-2075b20b4c28",
00:18:47.495 "strip_size_kb": 0,
00:18:47.495 "state": "online",
00:18:47.495 "raid_level": "raid1",
00:18:47.495 "superblock": true,
00:18:47.495 "num_base_bdevs": 2,
00:18:47.495 "num_base_bdevs_discovered": 2,
00:18:47.495 "num_base_bdevs_operational": 2,
00:18:47.495 "base_bdevs_list": [
00:18:47.495 {
00:18:47.495 "name": "spare",
00:18:47.495 "uuid":
"91f5a8a2-33b0-5272-93a1-08ff7c23a085", 00:18:47.495 "is_configured": true, 00:18:47.495 "data_offset": 256, 00:18:47.495 "data_size": 7936 00:18:47.495 }, 00:18:47.495 { 00:18:47.495 "name": "BaseBdev2", 00:18:47.495 "uuid": "49e8fc1c-c01c-547e-9a22-7ae123a46fa5", 00:18:47.495 "is_configured": true, 00:18:47.495 "data_offset": 256, 00:18:47.495 "data_size": 7936 00:18:47.495 } 00:18:47.495 ] 00:18:47.495 }' 00:18:47.495 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.495 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.062 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:48.062 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.062 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.062 [2024-11-20 10:41:51.316251] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:48.062 [2024-11-20 10:41:51.316325] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:48.062 [2024-11-20 10:41:51.316447] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:48.062 [2024-11-20 10:41:51.316553] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:48.062 [2024-11-20 10:41:51.316598] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:48.062 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.062 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.062 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.062 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.062 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:18:48.062 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.062 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:48.062 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:48.062 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:48.062 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:48.062 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:48.062 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:48.062 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:48.062 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:48.062 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:48.062 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:48.062 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:48.062 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:48.062 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 
/dev/nbd0 00:18:48.321 /dev/nbd0 00:18:48.321 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:48.321 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:48.321 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:48.321 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:48.321 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:48.321 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:48.321 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:48.321 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:48.321 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:48.321 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:48.321 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:48.321 1+0 records in 00:18:48.321 1+0 records out 00:18:48.321 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340227 s, 12.0 MB/s 00:18:48.321 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:48.321 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:48.321 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:48.321 10:41:51 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:48.321 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:48.321 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:48.321 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:48.321 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:48.580 /dev/nbd1 00:18:48.580 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:48.580 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:48.580 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:48.580 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:48.580 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:48.580 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:48.580 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:48.580 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:48.580 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:48.580 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:48.580 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:18:48.580 1+0 records in 00:18:48.580 1+0 records out 00:18:48.580 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229067 s, 17.9 MB/s 00:18:48.580 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:48.580 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:48.580 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:48.580 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:48.580 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:48.580 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:48.580 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:48.580 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:48.580 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:48.580 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:48.580 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:48.580 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:48.580 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:48.580 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:48.580 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:48.839 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:48.839 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:48.839 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:48.839 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:48.839 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:48.839 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:48.839 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:48.839 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:48.839 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:48.839 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:49.097 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:49.097 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:49.097 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:49.097 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:49.097 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:49.097 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:49.097 
10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:49.097 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:49.097 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:49.097 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:49.097 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.097 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.097 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.097 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:49.097 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.097 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.097 [2024-11-20 10:41:52.460602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:49.097 [2024-11-20 10:41:52.460720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:49.097 [2024-11-20 10:41:52.460774] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:49.097 [2024-11-20 10:41:52.460804] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:49.097 [2024-11-20 10:41:52.462708] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:49.097 [2024-11-20 10:41:52.462776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:49.097 [2024-11-20 10:41:52.462877] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev spare 00:18:49.097 [2024-11-20 10:41:52.462960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:49.097 [2024-11-20 10:41:52.463132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:49.097 spare 00:18:49.097 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.097 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:49.097 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.097 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.097 [2024-11-20 10:41:52.563043] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:49.097 [2024-11-20 10:41:52.563108] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:49.098 [2024-11-20 10:41:52.563226] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:49.098 [2024-11-20 10:41:52.563416] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:49.098 [2024-11-20 10:41:52.563458] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:49.098 [2024-11-20 10:41:52.563618] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:49.098 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.098 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:49.098 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:49.098 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:49.098 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:49.098 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:49.098 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:49.098 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.098 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.098 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.098 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.098 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.098 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.098 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.356 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.356 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.356 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.356 "name": "raid_bdev1", 00:18:49.356 "uuid": "94afbbe8-a41e-41c2-a225-2075b20b4c28", 00:18:49.356 "strip_size_kb": 0, 00:18:49.356 "state": "online", 00:18:49.356 "raid_level": "raid1", 00:18:49.356 "superblock": true, 00:18:49.356 "num_base_bdevs": 2, 00:18:49.356 "num_base_bdevs_discovered": 2, 00:18:49.356 "num_base_bdevs_operational": 2, 00:18:49.356 "base_bdevs_list": [ 
00:18:49.356 { 00:18:49.356 "name": "spare", 00:18:49.356 "uuid": "91f5a8a2-33b0-5272-93a1-08ff7c23a085", 00:18:49.356 "is_configured": true, 00:18:49.356 "data_offset": 256, 00:18:49.356 "data_size": 7936 00:18:49.356 }, 00:18:49.356 { 00:18:49.356 "name": "BaseBdev2", 00:18:49.356 "uuid": "49e8fc1c-c01c-547e-9a22-7ae123a46fa5", 00:18:49.356 "is_configured": true, 00:18:49.356 "data_offset": 256, 00:18:49.356 "data_size": 7936 00:18:49.356 } 00:18:49.356 ] 00:18:49.356 }' 00:18:49.356 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.356 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.615 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:49.615 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:49.615 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:49.615 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:49.615 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:49.615 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.615 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.615 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.615 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.615 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.873 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:49.873 "name": "raid_bdev1", 00:18:49.873 "uuid": "94afbbe8-a41e-41c2-a225-2075b20b4c28", 00:18:49.873 "strip_size_kb": 0, 00:18:49.873 "state": "online", 00:18:49.873 "raid_level": "raid1", 00:18:49.873 "superblock": true, 00:18:49.873 "num_base_bdevs": 2, 00:18:49.873 "num_base_bdevs_discovered": 2, 00:18:49.873 "num_base_bdevs_operational": 2, 00:18:49.873 "base_bdevs_list": [ 00:18:49.873 { 00:18:49.873 "name": "spare", 00:18:49.873 "uuid": "91f5a8a2-33b0-5272-93a1-08ff7c23a085", 00:18:49.873 "is_configured": true, 00:18:49.873 "data_offset": 256, 00:18:49.873 "data_size": 7936 00:18:49.873 }, 00:18:49.873 { 00:18:49.873 "name": "BaseBdev2", 00:18:49.873 "uuid": "49e8fc1c-c01c-547e-9a22-7ae123a46fa5", 00:18:49.873 "is_configured": true, 00:18:49.873 "data_offset": 256, 00:18:49.873 "data_size": 7936 00:18:49.873 } 00:18:49.873 ] 00:18:49.873 }' 00:18:49.873 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:49.873 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:49.873 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:49.873 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:49.873 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:49.873 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.873 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.873 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.873 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:18:49.873 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:49.873 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:49.873 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.873 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.873 [2024-11-20 10:41:53.231443] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:49.873 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.873 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:49.873 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:49.873 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:49.873 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:49.873 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:49.873 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:49.873 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.873 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.873 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.873 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.873 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.873 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.873 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.873 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.873 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.873 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.873 "name": "raid_bdev1", 00:18:49.873 "uuid": "94afbbe8-a41e-41c2-a225-2075b20b4c28", 00:18:49.873 "strip_size_kb": 0, 00:18:49.873 "state": "online", 00:18:49.873 "raid_level": "raid1", 00:18:49.873 "superblock": true, 00:18:49.873 "num_base_bdevs": 2, 00:18:49.873 "num_base_bdevs_discovered": 1, 00:18:49.873 "num_base_bdevs_operational": 1, 00:18:49.873 "base_bdevs_list": [ 00:18:49.873 { 00:18:49.873 "name": null, 00:18:49.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.873 "is_configured": false, 00:18:49.873 "data_offset": 0, 00:18:49.873 "data_size": 7936 00:18:49.873 }, 00:18:49.873 { 00:18:49.873 "name": "BaseBdev2", 00:18:49.873 "uuid": "49e8fc1c-c01c-547e-9a22-7ae123a46fa5", 00:18:49.873 "is_configured": true, 00:18:49.873 "data_offset": 256, 00:18:49.873 "data_size": 7936 00:18:49.873 } 00:18:49.873 ] 00:18:49.873 }' 00:18:49.873 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.873 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:50.440 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:50.440 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:50.440 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:50.440 [2024-11-20 10:41:53.614782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:50.440 [2024-11-20 10:41:53.614956] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:50.440 [2024-11-20 10:41:53.614973] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:50.440 [2024-11-20 10:41:53.615009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:50.440 [2024-11-20 10:41:53.627903] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:50.440 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.440 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:50.440 [2024-11-20 10:41:53.629669] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:51.376 10:41:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:51.376 10:41:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:51.376 10:41:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:51.376 10:41:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:51.376 10:41:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.376 10:41:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.376 10:41:54 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.376 10:41:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.376 10:41:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.376 10:41:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.376 10:41:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:51.376 "name": "raid_bdev1", 00:18:51.376 "uuid": "94afbbe8-a41e-41c2-a225-2075b20b4c28", 00:18:51.376 "strip_size_kb": 0, 00:18:51.376 "state": "online", 00:18:51.376 "raid_level": "raid1", 00:18:51.376 "superblock": true, 00:18:51.376 "num_base_bdevs": 2, 00:18:51.376 "num_base_bdevs_discovered": 2, 00:18:51.376 "num_base_bdevs_operational": 2, 00:18:51.376 "process": { 00:18:51.376 "type": "rebuild", 00:18:51.376 "target": "spare", 00:18:51.376 "progress": { 00:18:51.376 "blocks": 2560, 00:18:51.376 "percent": 32 00:18:51.376 } 00:18:51.376 }, 00:18:51.376 "base_bdevs_list": [ 00:18:51.376 { 00:18:51.376 "name": "spare", 00:18:51.376 "uuid": "91f5a8a2-33b0-5272-93a1-08ff7c23a085", 00:18:51.376 "is_configured": true, 00:18:51.376 "data_offset": 256, 00:18:51.376 "data_size": 7936 00:18:51.376 }, 00:18:51.376 { 00:18:51.376 "name": "BaseBdev2", 00:18:51.376 "uuid": "49e8fc1c-c01c-547e-9a22-7ae123a46fa5", 00:18:51.376 "is_configured": true, 00:18:51.376 "data_offset": 256, 00:18:51.376 "data_size": 7936 00:18:51.376 } 00:18:51.376 ] 00:18:51.376 }' 00:18:51.376 10:41:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:51.376 10:41:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:51.376 10:41:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:51.376 10:41:54 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:51.376 10:41:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:51.376 10:41:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.376 10:41:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.376 [2024-11-20 10:41:54.794062] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:51.376 [2024-11-20 10:41:54.834446] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:51.376 [2024-11-20 10:41:54.834500] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:51.376 [2024-11-20 10:41:54.834514] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:51.376 [2024-11-20 10:41:54.834533] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:51.635 10:41:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.635 10:41:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:51.635 10:41:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:51.635 10:41:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:51.635 10:41:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:51.635 10:41:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:51.635 10:41:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:51.635 10:41:54 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.635 10:41:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.635 10:41:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:51.635 10:41:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:51.635 10:41:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.635 10:41:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.635 10:41:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.635 10:41:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.635 10:41:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.635 10:41:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.635 "name": "raid_bdev1", 00:18:51.635 "uuid": "94afbbe8-a41e-41c2-a225-2075b20b4c28", 00:18:51.635 "strip_size_kb": 0, 00:18:51.635 "state": "online", 00:18:51.635 "raid_level": "raid1", 00:18:51.635 "superblock": true, 00:18:51.635 "num_base_bdevs": 2, 00:18:51.635 "num_base_bdevs_discovered": 1, 00:18:51.635 "num_base_bdevs_operational": 1, 00:18:51.635 "base_bdevs_list": [ 00:18:51.635 { 00:18:51.635 "name": null, 00:18:51.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.635 "is_configured": false, 00:18:51.635 "data_offset": 0, 00:18:51.635 "data_size": 7936 00:18:51.635 }, 00:18:51.635 { 00:18:51.635 "name": "BaseBdev2", 00:18:51.635 "uuid": "49e8fc1c-c01c-547e-9a22-7ae123a46fa5", 00:18:51.635 "is_configured": true, 00:18:51.635 "data_offset": 256, 00:18:51.635 "data_size": 7936 00:18:51.635 } 
00:18:51.635 ] 00:18:51.635 }' 00:18:51.635 10:41:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.635 10:41:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.895 10:41:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:51.895 10:41:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.895 10:41:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.895 [2024-11-20 10:41:55.256841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:51.895 [2024-11-20 10:41:55.256949] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:51.895 [2024-11-20 10:41:55.256990] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:51.895 [2024-11-20 10:41:55.257021] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:51.895 [2024-11-20 10:41:55.257281] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:51.895 [2024-11-20 10:41:55.257334] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:51.895 [2024-11-20 10:41:55.257433] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:51.895 [2024-11-20 10:41:55.257472] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:51.895 [2024-11-20 10:41:55.257508] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:51.895 [2024-11-20 10:41:55.257586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:51.895 [2024-11-20 10:41:55.270615] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:51.895 spare 00:18:51.895 10:41:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.895 [2024-11-20 10:41:55.272425] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:51.895 10:41:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:52.834 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:52.834 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:52.834 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:52.834 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:52.834 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.834 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.834 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.834 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.834 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.834 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.094 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:53.094 "name": 
"raid_bdev1", 00:18:53.094 "uuid": "94afbbe8-a41e-41c2-a225-2075b20b4c28", 00:18:53.094 "strip_size_kb": 0, 00:18:53.094 "state": "online", 00:18:53.094 "raid_level": "raid1", 00:18:53.094 "superblock": true, 00:18:53.094 "num_base_bdevs": 2, 00:18:53.094 "num_base_bdevs_discovered": 2, 00:18:53.094 "num_base_bdevs_operational": 2, 00:18:53.094 "process": { 00:18:53.094 "type": "rebuild", 00:18:53.094 "target": "spare", 00:18:53.094 "progress": { 00:18:53.094 "blocks": 2560, 00:18:53.094 "percent": 32 00:18:53.094 } 00:18:53.094 }, 00:18:53.094 "base_bdevs_list": [ 00:18:53.094 { 00:18:53.094 "name": "spare", 00:18:53.094 "uuid": "91f5a8a2-33b0-5272-93a1-08ff7c23a085", 00:18:53.094 "is_configured": true, 00:18:53.094 "data_offset": 256, 00:18:53.094 "data_size": 7936 00:18:53.094 }, 00:18:53.094 { 00:18:53.094 "name": "BaseBdev2", 00:18:53.094 "uuid": "49e8fc1c-c01c-547e-9a22-7ae123a46fa5", 00:18:53.095 "is_configured": true, 00:18:53.095 "data_offset": 256, 00:18:53.095 "data_size": 7936 00:18:53.095 } 00:18:53.095 ] 00:18:53.095 }' 00:18:53.095 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:53.095 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:53.095 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:53.095 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:53.095 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:53.095 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.095 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.095 [2024-11-20 10:41:56.416290] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:18:53.095 [2024-11-20 10:41:56.477170] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:53.095 [2024-11-20 10:41:56.477288] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:53.095 [2024-11-20 10:41:56.477308] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:53.095 [2024-11-20 10:41:56.477315] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:53.095 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.095 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:53.095 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:53.095 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:53.095 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:53.095 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:53.095 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:53.095 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:53.095 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:53.095 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:53.095 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:53.095 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:53.095 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.095 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.095 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.095 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.095 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:53.095 "name": "raid_bdev1", 00:18:53.095 "uuid": "94afbbe8-a41e-41c2-a225-2075b20b4c28", 00:18:53.095 "strip_size_kb": 0, 00:18:53.095 "state": "online", 00:18:53.095 "raid_level": "raid1", 00:18:53.095 "superblock": true, 00:18:53.095 "num_base_bdevs": 2, 00:18:53.095 "num_base_bdevs_discovered": 1, 00:18:53.095 "num_base_bdevs_operational": 1, 00:18:53.095 "base_bdevs_list": [ 00:18:53.095 { 00:18:53.095 "name": null, 00:18:53.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.095 "is_configured": false, 00:18:53.095 "data_offset": 0, 00:18:53.095 "data_size": 7936 00:18:53.095 }, 00:18:53.095 { 00:18:53.095 "name": "BaseBdev2", 00:18:53.095 "uuid": "49e8fc1c-c01c-547e-9a22-7ae123a46fa5", 00:18:53.095 "is_configured": true, 00:18:53.095 "data_offset": 256, 00:18:53.095 "data_size": 7936 00:18:53.095 } 00:18:53.095 ] 00:18:53.095 }' 00:18:53.095 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:53.095 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.666 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:53.666 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:53.666 10:41:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:53.666 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:53.666 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:53.666 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.666 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.666 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.666 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.666 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.666 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:53.666 "name": "raid_bdev1", 00:18:53.666 "uuid": "94afbbe8-a41e-41c2-a225-2075b20b4c28", 00:18:53.666 "strip_size_kb": 0, 00:18:53.666 "state": "online", 00:18:53.666 "raid_level": "raid1", 00:18:53.666 "superblock": true, 00:18:53.666 "num_base_bdevs": 2, 00:18:53.666 "num_base_bdevs_discovered": 1, 00:18:53.666 "num_base_bdevs_operational": 1, 00:18:53.666 "base_bdevs_list": [ 00:18:53.666 { 00:18:53.666 "name": null, 00:18:53.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.666 "is_configured": false, 00:18:53.666 "data_offset": 0, 00:18:53.666 "data_size": 7936 00:18:53.666 }, 00:18:53.666 { 00:18:53.666 "name": "BaseBdev2", 00:18:53.666 "uuid": "49e8fc1c-c01c-547e-9a22-7ae123a46fa5", 00:18:53.666 "is_configured": true, 00:18:53.666 "data_offset": 256, 00:18:53.666 "data_size": 7936 00:18:53.666 } 00:18:53.666 ] 00:18:53.666 }' 00:18:53.666 10:41:56 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:53.666 10:41:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:53.666 10:41:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:53.666 10:41:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:53.666 10:41:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:53.666 10:41:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.666 10:41:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.666 10:41:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.666 10:41:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:53.666 10:41:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.666 10:41:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.666 [2024-11-20 10:41:57.087894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:53.666 [2024-11-20 10:41:57.087952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:53.666 [2024-11-20 10:41:57.087978] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:53.666 [2024-11-20 10:41:57.087987] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:53.666 [2024-11-20 10:41:57.088180] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:53.666 [2024-11-20 10:41:57.088191] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:18:53.666 [2024-11-20 10:41:57.088237] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:53.666 [2024-11-20 10:41:57.088248] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:53.666 [2024-11-20 10:41:57.088257] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:53.666 [2024-11-20 10:41:57.088266] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:53.666 BaseBdev1 00:18:53.666 10:41:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.666 10:41:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:55.048 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:55.048 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:55.048 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:55.048 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:55.048 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:55.048 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:55.048 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.048 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.048 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:55.048 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.048 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.048 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.048 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.048 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.048 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.048 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.048 "name": "raid_bdev1", 00:18:55.048 "uuid": "94afbbe8-a41e-41c2-a225-2075b20b4c28", 00:18:55.048 "strip_size_kb": 0, 00:18:55.048 "state": "online", 00:18:55.048 "raid_level": "raid1", 00:18:55.048 "superblock": true, 00:18:55.048 "num_base_bdevs": 2, 00:18:55.048 "num_base_bdevs_discovered": 1, 00:18:55.048 "num_base_bdevs_operational": 1, 00:18:55.048 "base_bdevs_list": [ 00:18:55.048 { 00:18:55.048 "name": null, 00:18:55.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.048 "is_configured": false, 00:18:55.048 "data_offset": 0, 00:18:55.048 "data_size": 7936 00:18:55.048 }, 00:18:55.048 { 00:18:55.048 "name": "BaseBdev2", 00:18:55.048 "uuid": "49e8fc1c-c01c-547e-9a22-7ae123a46fa5", 00:18:55.048 "is_configured": true, 00:18:55.048 "data_offset": 256, 00:18:55.048 "data_size": 7936 00:18:55.048 } 00:18:55.048 ] 00:18:55.048 }' 00:18:55.048 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.048 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.309 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:18:55.309 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:55.309 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:55.309 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:55.309 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:55.309 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.309 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.309 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.309 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.309 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.309 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:55.309 "name": "raid_bdev1", 00:18:55.309 "uuid": "94afbbe8-a41e-41c2-a225-2075b20b4c28", 00:18:55.309 "strip_size_kb": 0, 00:18:55.309 "state": "online", 00:18:55.309 "raid_level": "raid1", 00:18:55.309 "superblock": true, 00:18:55.309 "num_base_bdevs": 2, 00:18:55.309 "num_base_bdevs_discovered": 1, 00:18:55.309 "num_base_bdevs_operational": 1, 00:18:55.309 "base_bdevs_list": [ 00:18:55.309 { 00:18:55.309 "name": null, 00:18:55.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.309 "is_configured": false, 00:18:55.309 "data_offset": 0, 00:18:55.309 "data_size": 7936 00:18:55.309 }, 00:18:55.309 { 00:18:55.309 "name": "BaseBdev2", 00:18:55.309 "uuid": "49e8fc1c-c01c-547e-9a22-7ae123a46fa5", 00:18:55.309 "is_configured": 
true, 00:18:55.309 "data_offset": 256, 00:18:55.309 "data_size": 7936 00:18:55.309 } 00:18:55.309 ] 00:18:55.309 }' 00:18:55.309 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:55.309 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:55.309 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:55.309 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:55.309 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:55.309 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:18:55.309 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:55.309 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:55.309 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.309 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:55.309 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.309 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:55.309 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.309 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.309 [2024-11-20 10:41:58.693195] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:55.309 [2024-11-20 10:41:58.693377] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:55.309 [2024-11-20 10:41:58.693394] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:55.309 request: 00:18:55.309 { 00:18:55.309 "base_bdev": "BaseBdev1", 00:18:55.309 "raid_bdev": "raid_bdev1", 00:18:55.309 "method": "bdev_raid_add_base_bdev", 00:18:55.309 "req_id": 1 00:18:55.309 } 00:18:55.309 Got JSON-RPC error response 00:18:55.309 response: 00:18:55.309 { 00:18:55.309 "code": -22, 00:18:55.309 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:55.309 } 00:18:55.309 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:55.309 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:18:55.309 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:55.309 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:55.309 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:55.309 10:41:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:56.255 10:41:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:56.255 10:41:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:56.255 10:41:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:56.255 10:41:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:18:56.255 10:41:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:56.255 10:41:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:56.255 10:41:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.255 10:41:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:56.255 10:41:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:56.255 10:41:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:56.255 10:41:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.255 10:41:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.255 10:41:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.255 10:41:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:56.514 10:41:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.514 10:41:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:56.514 "name": "raid_bdev1", 00:18:56.514 "uuid": "94afbbe8-a41e-41c2-a225-2075b20b4c28", 00:18:56.514 "strip_size_kb": 0, 00:18:56.514 "state": "online", 00:18:56.514 "raid_level": "raid1", 00:18:56.514 "superblock": true, 00:18:56.514 "num_base_bdevs": 2, 00:18:56.514 "num_base_bdevs_discovered": 1, 00:18:56.514 "num_base_bdevs_operational": 1, 00:18:56.514 "base_bdevs_list": [ 00:18:56.514 { 00:18:56.514 "name": null, 00:18:56.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.514 "is_configured": false, 00:18:56.514 
"data_offset": 0, 00:18:56.514 "data_size": 7936 00:18:56.514 }, 00:18:56.514 { 00:18:56.514 "name": "BaseBdev2", 00:18:56.514 "uuid": "49e8fc1c-c01c-547e-9a22-7ae123a46fa5", 00:18:56.514 "is_configured": true, 00:18:56.514 "data_offset": 256, 00:18:56.514 "data_size": 7936 00:18:56.514 } 00:18:56.514 ] 00:18:56.514 }' 00:18:56.514 10:41:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.514 10:41:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:56.774 10:42:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:56.774 10:42:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:56.774 10:42:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:56.774 10:42:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:56.774 10:42:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:56.774 10:42:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.774 10:42:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.774 10:42:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.774 10:42:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:56.775 10:42:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.775 10:42:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:56.775 "name": "raid_bdev1", 00:18:56.775 "uuid": "94afbbe8-a41e-41c2-a225-2075b20b4c28", 00:18:56.775 
"strip_size_kb": 0, 00:18:56.775 "state": "online", 00:18:56.775 "raid_level": "raid1", 00:18:56.775 "superblock": true, 00:18:56.775 "num_base_bdevs": 2, 00:18:56.775 "num_base_bdevs_discovered": 1, 00:18:56.775 "num_base_bdevs_operational": 1, 00:18:56.775 "base_bdevs_list": [ 00:18:56.775 { 00:18:56.775 "name": null, 00:18:56.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.775 "is_configured": false, 00:18:56.775 "data_offset": 0, 00:18:56.775 "data_size": 7936 00:18:56.775 }, 00:18:56.775 { 00:18:56.775 "name": "BaseBdev2", 00:18:56.775 "uuid": "49e8fc1c-c01c-547e-9a22-7ae123a46fa5", 00:18:56.775 "is_configured": true, 00:18:56.775 "data_offset": 256, 00:18:56.775 "data_size": 7936 00:18:56.775 } 00:18:56.775 ] 00:18:56.775 }' 00:18:56.775 10:42:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:56.775 10:42:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:56.775 10:42:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:56.775 10:42:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:56.775 10:42:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87914 00:18:56.775 10:42:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87914 ']' 00:18:56.775 10:42:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87914 00:18:56.775 10:42:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:56.775 10:42:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:56.775 10:42:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87914 00:18:57.034 10:42:00 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:57.034 10:42:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:57.034 killing process with pid 87914 00:18:57.034 Received shutdown signal, test time was about 60.000000 seconds 00:18:57.034 00:18:57.034 Latency(us) 00:18:57.034 [2024-11-20T10:42:00.513Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.034 [2024-11-20T10:42:00.513Z] =================================================================================================================== 00:18:57.034 [2024-11-20T10:42:00.513Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:57.034 10:42:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87914' 00:18:57.034 10:42:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87914 00:18:57.034 [2024-11-20 10:42:00.264131] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:57.034 [2024-11-20 10:42:00.264250] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:57.034 [2024-11-20 10:42:00.264298] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:57.034 10:42:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87914 00:18:57.034 [2024-11-20 10:42:00.264309] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:57.292 [2024-11-20 10:42:00.563559] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:58.231 10:42:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:18:58.231 00:18:58.231 real 0m19.380s 00:18:58.231 user 0m25.343s 00:18:58.231 sys 0m2.464s 00:18:58.231 
************************************ 00:18:58.231 END TEST raid_rebuild_test_sb_md_separate 00:18:58.231 ************************************ 00:18:58.231 10:42:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:58.231 10:42:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:58.231 10:42:01 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:18:58.231 10:42:01 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:18:58.231 10:42:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:58.231 10:42:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:58.231 10:42:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:58.231 ************************************ 00:18:58.231 START TEST raid_state_function_test_sb_md_interleaved 00:18:58.231 ************************************ 00:18:58.231 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:58.231 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:58.231 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:58.231 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:58.231 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:58.231 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:58.231 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:58.231 10:42:01 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:58.231 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:58.231 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:58.231 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:58.231 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:58.231 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:58.231 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:58.231 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:58.231 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:58.231 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:58.231 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:58.231 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:58.231 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:58.231 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:58.231 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:58.231 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:18:58.231 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88599 00:18:58.231 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:58.231 Process raid pid: 88599 00:18:58.231 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88599' 00:18:58.231 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88599 00:18:58.231 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88599 ']' 00:18:58.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.231 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.231 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:58.231 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.231 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:58.231 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.490 [2024-11-20 10:42:01.742669] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:18:58.490 [2024-11-20 10:42:01.742795] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:58.490 [2024-11-20 10:42:01.914891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.750 [2024-11-20 10:42:02.029925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.009 [2024-11-20 10:42:02.225842] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:59.009 [2024-11-20 10:42:02.225875] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:59.269 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:59.269 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:59.269 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:59.269 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.269 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.269 [2024-11-20 10:42:02.554780] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:59.269 [2024-11-20 10:42:02.554832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:59.269 [2024-11-20 10:42:02.554843] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:59.269 [2024-11-20 10:42:02.554852] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:59.269 10:42:02 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.269 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:59.269 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:59.269 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:59.269 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:59.269 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:59.269 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:59.269 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.269 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.269 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.269 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.269 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.269 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.269 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.269 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.269 10:42:02 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.269 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.269 "name": "Existed_Raid", 00:18:59.269 "uuid": "341b1944-6c8d-4e56-8c89-1160c5ff64c3", 00:18:59.269 "strip_size_kb": 0, 00:18:59.269 "state": "configuring", 00:18:59.269 "raid_level": "raid1", 00:18:59.269 "superblock": true, 00:18:59.269 "num_base_bdevs": 2, 00:18:59.269 "num_base_bdevs_discovered": 0, 00:18:59.269 "num_base_bdevs_operational": 2, 00:18:59.269 "base_bdevs_list": [ 00:18:59.269 { 00:18:59.269 "name": "BaseBdev1", 00:18:59.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.269 "is_configured": false, 00:18:59.269 "data_offset": 0, 00:18:59.270 "data_size": 0 00:18:59.270 }, 00:18:59.270 { 00:18:59.270 "name": "BaseBdev2", 00:18:59.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.270 "is_configured": false, 00:18:59.270 "data_offset": 0, 00:18:59.270 "data_size": 0 00:18:59.270 } 00:18:59.270 ] 00:18:59.270 }' 00:18:59.270 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.270 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.530 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:59.530 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.530 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.530 [2024-11-20 10:42:02.974012] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:59.530 [2024-11-20 10:42:02.974047] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:18:59.530 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.530 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:59.530 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.530 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.530 [2024-11-20 10:42:02.985979] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:59.530 [2024-11-20 10:42:02.986019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:59.530 [2024-11-20 10:42:02.986028] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:59.530 [2024-11-20 10:42:02.986055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:59.530 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.530 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:18:59.530 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.530 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.790 [2024-11-20 10:42:03.036289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:59.790 BaseBdev1 00:18:59.790 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.790 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:59.790 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:59.790 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:59.790 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:18:59.790 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:59.790 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:59.790 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:59.790 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.790 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.790 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.790 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:59.790 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.790 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.790 [ 00:18:59.790 { 00:18:59.790 "name": "BaseBdev1", 00:18:59.790 "aliases": [ 00:18:59.790 "b32959dd-45db-44e0-9c42-636e816d1c94" 00:18:59.790 ], 00:18:59.790 "product_name": "Malloc disk", 00:18:59.790 "block_size": 4128, 00:18:59.790 "num_blocks": 8192, 00:18:59.790 "uuid": "b32959dd-45db-44e0-9c42-636e816d1c94", 00:18:59.790 "md_size": 32, 00:18:59.790 
"md_interleave": true, 00:18:59.790 "dif_type": 0, 00:18:59.790 "assigned_rate_limits": { 00:18:59.790 "rw_ios_per_sec": 0, 00:18:59.790 "rw_mbytes_per_sec": 0, 00:18:59.790 "r_mbytes_per_sec": 0, 00:18:59.790 "w_mbytes_per_sec": 0 00:18:59.790 }, 00:18:59.790 "claimed": true, 00:18:59.790 "claim_type": "exclusive_write", 00:18:59.790 "zoned": false, 00:18:59.790 "supported_io_types": { 00:18:59.790 "read": true, 00:18:59.790 "write": true, 00:18:59.790 "unmap": true, 00:18:59.790 "flush": true, 00:18:59.790 "reset": true, 00:18:59.790 "nvme_admin": false, 00:18:59.790 "nvme_io": false, 00:18:59.790 "nvme_io_md": false, 00:18:59.790 "write_zeroes": true, 00:18:59.790 "zcopy": true, 00:18:59.790 "get_zone_info": false, 00:18:59.790 "zone_management": false, 00:18:59.790 "zone_append": false, 00:18:59.790 "compare": false, 00:18:59.790 "compare_and_write": false, 00:18:59.790 "abort": true, 00:18:59.790 "seek_hole": false, 00:18:59.790 "seek_data": false, 00:18:59.790 "copy": true, 00:18:59.790 "nvme_iov_md": false 00:18:59.790 }, 00:18:59.790 "memory_domains": [ 00:18:59.790 { 00:18:59.790 "dma_device_id": "system", 00:18:59.790 "dma_device_type": 1 00:18:59.790 }, 00:18:59.790 { 00:18:59.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.790 "dma_device_type": 2 00:18:59.790 } 00:18:59.790 ], 00:18:59.790 "driver_specific": {} 00:18:59.790 } 00:18:59.790 ] 00:18:59.790 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.790 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:18:59.790 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:59.790 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:59.790 10:42:03 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:59.790 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:59.790 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:59.790 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:59.790 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.790 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.790 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.790 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.790 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.790 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.790 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.790 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.790 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.790 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.790 "name": "Existed_Raid", 00:18:59.790 "uuid": "34aba6fa-e699-4793-b8cb-de43656cb5d6", 00:18:59.790 "strip_size_kb": 0, 00:18:59.790 "state": "configuring", 00:18:59.790 "raid_level": "raid1", 
00:18:59.790 "superblock": true, 00:18:59.790 "num_base_bdevs": 2, 00:18:59.790 "num_base_bdevs_discovered": 1, 00:18:59.790 "num_base_bdevs_operational": 2, 00:18:59.790 "base_bdevs_list": [ 00:18:59.790 { 00:18:59.790 "name": "BaseBdev1", 00:18:59.790 "uuid": "b32959dd-45db-44e0-9c42-636e816d1c94", 00:18:59.790 "is_configured": true, 00:18:59.790 "data_offset": 256, 00:18:59.790 "data_size": 7936 00:18:59.790 }, 00:18:59.790 { 00:18:59.790 "name": "BaseBdev2", 00:18:59.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.790 "is_configured": false, 00:18:59.790 "data_offset": 0, 00:18:59.790 "data_size": 0 00:18:59.790 } 00:18:59.790 ] 00:18:59.790 }' 00:18:59.790 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.791 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.050 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:00.051 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.051 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.051 [2024-11-20 10:42:03.511539] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:00.051 [2024-11-20 10:42:03.511590] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:00.051 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.051 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:00.051 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:19:00.051 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.051 [2024-11-20 10:42:03.523573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:00.051 [2024-11-20 10:42:03.525292] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:00.051 [2024-11-20 10:42:03.525336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:00.310 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.310 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:00.310 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:00.310 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:00.310 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:00.310 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:00.311 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:00.311 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:00.311 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:00.311 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.311 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.311 
10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.311 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.311 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.311 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.311 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.311 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.311 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.311 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.311 "name": "Existed_Raid", 00:19:00.311 "uuid": "e12468db-e494-424b-b4e2-0670dcdbafea", 00:19:00.311 "strip_size_kb": 0, 00:19:00.311 "state": "configuring", 00:19:00.311 "raid_level": "raid1", 00:19:00.311 "superblock": true, 00:19:00.311 "num_base_bdevs": 2, 00:19:00.311 "num_base_bdevs_discovered": 1, 00:19:00.311 "num_base_bdevs_operational": 2, 00:19:00.311 "base_bdevs_list": [ 00:19:00.311 { 00:19:00.311 "name": "BaseBdev1", 00:19:00.311 "uuid": "b32959dd-45db-44e0-9c42-636e816d1c94", 00:19:00.311 "is_configured": true, 00:19:00.311 "data_offset": 256, 00:19:00.311 "data_size": 7936 00:19:00.311 }, 00:19:00.311 { 00:19:00.311 "name": "BaseBdev2", 00:19:00.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.311 "is_configured": false, 00:19:00.311 "data_offset": 0, 00:19:00.311 "data_size": 0 00:19:00.311 } 00:19:00.311 ] 00:19:00.311 }' 00:19:00.311 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:19:00.311 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.571 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:19:00.571 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.571 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.571 [2024-11-20 10:42:03.926605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:00.572 [2024-11-20 10:42:03.926890] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:00.572 [2024-11-20 10:42:03.926928] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:00.572 [2024-11-20 10:42:03.927059] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:00.572 [2024-11-20 10:42:03.927163] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:00.572 [2024-11-20 10:42:03.927200] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:00.572 [2024-11-20 10:42:03.927291] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:00.572 BaseBdev2 00:19:00.572 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.572 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:00.572 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:00.572 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:19:00.572 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:19:00.572 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:00.572 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:00.572 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:00.572 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.572 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.572 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.572 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:00.572 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.572 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.572 [ 00:19:00.572 { 00:19:00.572 "name": "BaseBdev2", 00:19:00.572 "aliases": [ 00:19:00.572 "3018fc64-7c7f-4ab3-8f4f-c2114bba061b" 00:19:00.572 ], 00:19:00.572 "product_name": "Malloc disk", 00:19:00.572 "block_size": 4128, 00:19:00.572 "num_blocks": 8192, 00:19:00.572 "uuid": "3018fc64-7c7f-4ab3-8f4f-c2114bba061b", 00:19:00.572 "md_size": 32, 00:19:00.572 "md_interleave": true, 00:19:00.572 "dif_type": 0, 00:19:00.572 "assigned_rate_limits": { 00:19:00.572 "rw_ios_per_sec": 0, 00:19:00.572 "rw_mbytes_per_sec": 0, 00:19:00.572 "r_mbytes_per_sec": 0, 00:19:00.572 "w_mbytes_per_sec": 0 00:19:00.572 }, 00:19:00.572 "claimed": true, 00:19:00.572 "claim_type": "exclusive_write", 
00:19:00.572 "zoned": false, 00:19:00.572 "supported_io_types": { 00:19:00.572 "read": true, 00:19:00.572 "write": true, 00:19:00.572 "unmap": true, 00:19:00.572 "flush": true, 00:19:00.572 "reset": true, 00:19:00.572 "nvme_admin": false, 00:19:00.572 "nvme_io": false, 00:19:00.572 "nvme_io_md": false, 00:19:00.572 "write_zeroes": true, 00:19:00.572 "zcopy": true, 00:19:00.572 "get_zone_info": false, 00:19:00.572 "zone_management": false, 00:19:00.572 "zone_append": false, 00:19:00.572 "compare": false, 00:19:00.572 "compare_and_write": false, 00:19:00.572 "abort": true, 00:19:00.572 "seek_hole": false, 00:19:00.572 "seek_data": false, 00:19:00.572 "copy": true, 00:19:00.572 "nvme_iov_md": false 00:19:00.572 }, 00:19:00.572 "memory_domains": [ 00:19:00.572 { 00:19:00.572 "dma_device_id": "system", 00:19:00.572 "dma_device_type": 1 00:19:00.572 }, 00:19:00.572 { 00:19:00.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.572 "dma_device_type": 2 00:19:00.572 } 00:19:00.572 ], 00:19:00.572 "driver_specific": {} 00:19:00.572 } 00:19:00.572 ] 00:19:00.572 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.572 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:19:00.572 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:00.572 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:00.572 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:00.572 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:00.572 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:00.572 
10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:00.572 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:00.572 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:00.572 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.572 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.572 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.572 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.572 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.572 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.572 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.572 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.572 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.572 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.572 "name": "Existed_Raid", 00:19:00.572 "uuid": "e12468db-e494-424b-b4e2-0670dcdbafea", 00:19:00.572 "strip_size_kb": 0, 00:19:00.572 "state": "online", 00:19:00.572 "raid_level": "raid1", 00:19:00.572 "superblock": true, 00:19:00.572 "num_base_bdevs": 2, 00:19:00.572 "num_base_bdevs_discovered": 2, 00:19:00.572 
"num_base_bdevs_operational": 2, 00:19:00.572 "base_bdevs_list": [ 00:19:00.572 { 00:19:00.572 "name": "BaseBdev1", 00:19:00.572 "uuid": "b32959dd-45db-44e0-9c42-636e816d1c94", 00:19:00.572 "is_configured": true, 00:19:00.572 "data_offset": 256, 00:19:00.572 "data_size": 7936 00:19:00.572 }, 00:19:00.572 { 00:19:00.572 "name": "BaseBdev2", 00:19:00.572 "uuid": "3018fc64-7c7f-4ab3-8f4f-c2114bba061b", 00:19:00.572 "is_configured": true, 00:19:00.572 "data_offset": 256, 00:19:00.572 "data_size": 7936 00:19:00.572 } 00:19:00.572 ] 00:19:00.572 }' 00:19:00.572 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.572 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.142 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:01.142 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:01.142 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:01.142 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:01.142 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:01.142 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:01.142 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:01.142 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:01.142 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.142 10:42:04 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.142 [2024-11-20 10:42:04.442071] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:01.142 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.142 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:01.142 "name": "Existed_Raid", 00:19:01.142 "aliases": [ 00:19:01.142 "e12468db-e494-424b-b4e2-0670dcdbafea" 00:19:01.142 ], 00:19:01.142 "product_name": "Raid Volume", 00:19:01.142 "block_size": 4128, 00:19:01.142 "num_blocks": 7936, 00:19:01.142 "uuid": "e12468db-e494-424b-b4e2-0670dcdbafea", 00:19:01.142 "md_size": 32, 00:19:01.142 "md_interleave": true, 00:19:01.142 "dif_type": 0, 00:19:01.142 "assigned_rate_limits": { 00:19:01.142 "rw_ios_per_sec": 0, 00:19:01.142 "rw_mbytes_per_sec": 0, 00:19:01.142 "r_mbytes_per_sec": 0, 00:19:01.142 "w_mbytes_per_sec": 0 00:19:01.142 }, 00:19:01.142 "claimed": false, 00:19:01.142 "zoned": false, 00:19:01.142 "supported_io_types": { 00:19:01.142 "read": true, 00:19:01.142 "write": true, 00:19:01.142 "unmap": false, 00:19:01.142 "flush": false, 00:19:01.142 "reset": true, 00:19:01.142 "nvme_admin": false, 00:19:01.142 "nvme_io": false, 00:19:01.142 "nvme_io_md": false, 00:19:01.142 "write_zeroes": true, 00:19:01.142 "zcopy": false, 00:19:01.142 "get_zone_info": false, 00:19:01.142 "zone_management": false, 00:19:01.142 "zone_append": false, 00:19:01.142 "compare": false, 00:19:01.142 "compare_and_write": false, 00:19:01.142 "abort": false, 00:19:01.142 "seek_hole": false, 00:19:01.142 "seek_data": false, 00:19:01.142 "copy": false, 00:19:01.142 "nvme_iov_md": false 00:19:01.142 }, 00:19:01.142 "memory_domains": [ 00:19:01.142 { 00:19:01.142 "dma_device_id": "system", 00:19:01.142 "dma_device_type": 1 00:19:01.142 }, 00:19:01.142 { 00:19:01.142 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:01.142 "dma_device_type": 2 00:19:01.142 }, 00:19:01.142 { 00:19:01.142 "dma_device_id": "system", 00:19:01.142 "dma_device_type": 1 00:19:01.142 }, 00:19:01.142 { 00:19:01.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.142 "dma_device_type": 2 00:19:01.142 } 00:19:01.142 ], 00:19:01.142 "driver_specific": { 00:19:01.142 "raid": { 00:19:01.142 "uuid": "e12468db-e494-424b-b4e2-0670dcdbafea", 00:19:01.142 "strip_size_kb": 0, 00:19:01.142 "state": "online", 00:19:01.142 "raid_level": "raid1", 00:19:01.142 "superblock": true, 00:19:01.142 "num_base_bdevs": 2, 00:19:01.142 "num_base_bdevs_discovered": 2, 00:19:01.142 "num_base_bdevs_operational": 2, 00:19:01.142 "base_bdevs_list": [ 00:19:01.142 { 00:19:01.142 "name": "BaseBdev1", 00:19:01.142 "uuid": "b32959dd-45db-44e0-9c42-636e816d1c94", 00:19:01.142 "is_configured": true, 00:19:01.142 "data_offset": 256, 00:19:01.142 "data_size": 7936 00:19:01.142 }, 00:19:01.142 { 00:19:01.142 "name": "BaseBdev2", 00:19:01.142 "uuid": "3018fc64-7c7f-4ab3-8f4f-c2114bba061b", 00:19:01.142 "is_configured": true, 00:19:01.142 "data_offset": 256, 00:19:01.142 "data_size": 7936 00:19:01.142 } 00:19:01.142 ] 00:19:01.142 } 00:19:01.142 } 00:19:01.142 }' 00:19:01.142 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:01.142 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:01.142 BaseBdev2' 00:19:01.142 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:01.142 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:01.142 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:19:01.142 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:01.142 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:01.142 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.142 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.142 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.142 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:01.142 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:01.142 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:01.142 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:01.142 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.142 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.142 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:01.142 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.403 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:01.403 
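The `verify_raid_bdev_properties` steps above compare the same four metadata fields (`block_size`, `md_size`, `md_interleave`, `dif_type`) between the raid bdev and each configured base bdev, using jq's `join(" ")` to produce the `4128 32 true 0` strings being matched. A minimal Python sketch of that comparison follows; the sample dicts are illustrative, shaped like the `bdev_get_bdevs` output in this log, not actual RPC calls:

```python
def bdev_fields(bdev: dict) -> str:
    # Mirror jq's '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'.
    # jq renders booleans as lowercase true/false, so lowercase them here too.
    return " ".join(
        str(bdev[k]).lower() if isinstance(bdev[k], bool) else str(bdev[k])
        for k in ("block_size", "md_size", "md_interleave", "dif_type")
    )

# Values taken from the Existed_Raid / BaseBdev dumps above (illustrative only).
raid_info = {"block_size": 4128, "md_size": 32, "md_interleave": True, "dif_type": 0}
base_info = {"block_size": 4128, "md_size": 32, "md_interleave": True, "dif_type": 0}

cmp_raid_bdev = bdev_fields(raid_info)
cmp_base_bdev = bdev_fields(base_info)
assert cmp_raid_bdev == cmp_base_bdev  # matches the [[ ... == ... ]] check in the log
```

The test passes only if every configured base bdev reports the identical field string, which is why the log shows the same `4128 32 true 0` value for both BaseBdev1 and BaseBdev2.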
10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:01.403 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:01.403 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.403 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.403 [2024-11-20 10:42:04.653483] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:01.403 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.403 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:01.403 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:01.403 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:01.403 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:19:01.403 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:01.403 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:01.403 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:01.403 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:01.403 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:01.403 10:42:04 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:01.403 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:01.403 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.403 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.403 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.403 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.403 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.403 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:01.403 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.403 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.403 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.403 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.403 "name": "Existed_Raid", 00:19:01.403 "uuid": "e12468db-e494-424b-b4e2-0670dcdbafea", 00:19:01.403 "strip_size_kb": 0, 00:19:01.403 "state": "online", 00:19:01.403 "raid_level": "raid1", 00:19:01.403 "superblock": true, 00:19:01.403 "num_base_bdevs": 2, 00:19:01.403 "num_base_bdevs_discovered": 1, 00:19:01.403 "num_base_bdevs_operational": 1, 00:19:01.403 "base_bdevs_list": [ 00:19:01.403 { 00:19:01.403 "name": null, 00:19:01.403 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:01.403 "is_configured": false, 00:19:01.403 "data_offset": 0, 00:19:01.403 "data_size": 7936 00:19:01.403 }, 00:19:01.403 { 00:19:01.403 "name": "BaseBdev2", 00:19:01.403 "uuid": "3018fc64-7c7f-4ab3-8f4f-c2114bba061b", 00:19:01.403 "is_configured": true, 00:19:01.403 "data_offset": 256, 00:19:01.403 "data_size": 7936 00:19:01.403 } 00:19:01.403 ] 00:19:01.403 }' 00:19:01.403 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.403 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.663 10:42:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:01.663 10:42:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:01.663 10:42:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:01.663 10:42:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.663 10:42:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.663 10:42:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.923 10:42:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.923 10:42:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:01.923 10:42:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:01.923 10:42:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:01.923 10:42:05 
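After `bdev_malloc_delete BaseBdev1`, the `verify_raid_bdev_state Existed_Raid online raid1 0 1` call above checks that the raid1 volume stays online with one operational base bdev (raid1 has redundancy, per `has_redundancy`). A rough Python analogue of that verification is sketched below; the helper name and sample data are illustrative, modeled on the `bdev_raid_get_bdevs all` dump in this log:

```python
def verify_raid_bdev_state(bdevs, name, expected_state, raid_level,
                           strip_size, operational):
    # Pick the named raid bdev, as jq's 'select(.name == "...")' does,
    # then check the fields the shell test asserts on.
    info = next(b for b in bdevs if b["name"] == name)
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational
    # discovered count should agree with the configured entries in the list
    configured = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert info["num_base_bdevs_discovered"] == configured
    return info

# Shaped like the dump above, after BaseBdev1 was removed (null/zero-uuid slot).
sample = [{
    "name": "Existed_Raid", "state": "online", "raid_level": "raid1",
    "strip_size_kb": 0, "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 1,
    "base_bdevs_list": [
        {"name": None, "is_configured": False},
        {"name": "BaseBdev2", "is_configured": True},
    ],
}]
verify_raid_bdev_state(sample, "Existed_Raid", "online", "raid1", 0, 1)
```

Note the removed slot keeps a placeholder entry (`"name": null`, all-zero uuid) rather than being dropped from `base_bdevs_list`, which is why the discovered count is derived from `is_configured` flags.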
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.923 10:42:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.923 [2024-11-20 10:42:05.177578] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:01.923 [2024-11-20 10:42:05.177728] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:01.923 [2024-11-20 10:42:05.267747] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:01.923 [2024-11-20 10:42:05.267794] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:01.923 [2024-11-20 10:42:05.267805] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:01.923 10:42:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.923 10:42:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:01.923 10:42:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:01.923 10:42:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.923 10:42:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:01.923 10:42:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.923 10:42:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.923 10:42:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.923 10:42:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:01.923 10:42:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:01.923 10:42:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:01.923 10:42:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88599 00:19:01.923 10:42:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88599 ']' 00:19:01.923 10:42:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88599 00:19:01.923 10:42:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:19:01.923 10:42:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:01.923 10:42:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88599 00:19:01.923 killing process with pid 88599 00:19:01.923 10:42:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:01.923 10:42:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:01.923 10:42:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88599' 00:19:01.923 10:42:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88599 00:19:01.923 [2024-11-20 10:42:05.360896] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:01.924 10:42:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88599 00:19:01.924 [2024-11-20 10:42:05.377698] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:03.305 
************************************ 00:19:03.305 END TEST raid_state_function_test_sb_md_interleaved 00:19:03.305 ************************************ 00:19:03.305 10:42:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:19:03.305 00:19:03.305 real 0m4.759s 00:19:03.305 user 0m6.814s 00:19:03.305 sys 0m0.805s 00:19:03.305 10:42:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:03.305 10:42:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.305 10:42:06 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:19:03.305 10:42:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:03.305 10:42:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:03.305 10:42:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:03.305 ************************************ 00:19:03.305 START TEST raid_superblock_test_md_interleaved 00:19:03.305 ************************************ 00:19:03.305 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:19:03.305 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:03.305 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:03.305 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:03.305 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:03.305 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:03.305 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:19:03.305 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:03.305 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:03.305 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:03.305 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:03.305 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:03.305 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:03.305 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:03.305 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:03.305 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:03.305 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88841 00:19:03.305 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:03.305 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88841 00:19:03.305 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88841 ']' 00:19:03.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:03.305 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.305 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:03.305 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.305 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:03.305 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.305 [2024-11-20 10:42:06.566985] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:19:03.305 [2024-11-20 10:42:06.567161] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88841 ] 00:19:03.305 [2024-11-20 10:42:06.719123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.565 [2024-11-20 10:42:06.822374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.565 [2024-11-20 10:42:07.006245] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:03.565 [2024-11-20 10:42:07.006408] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.136 malloc1 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.136 [2024-11-20 10:42:07.435494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:04.136 [2024-11-20 10:42:07.435617] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:19:04.136 [2024-11-20 10:42:07.435661] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:04.136 [2024-11-20 10:42:07.435690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:04.136 [2024-11-20 10:42:07.437535] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:04.136 [2024-11-20 10:42:07.437615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:04.136 pt1 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.136 10:42:07 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.136 malloc2 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.136 [2024-11-20 10:42:07.492107] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:04.136 [2024-11-20 10:42:07.492228] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:04.136 [2024-11-20 10:42:07.492269] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:04.136 [2024-11-20 10:42:07.492310] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:04.136 [2024-11-20 10:42:07.494084] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:04.136 [2024-11-20 10:42:07.494120] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:04.136 pt2 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.136 [2024-11-20 10:42:07.504128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:04.136 [2024-11-20 10:42:07.505934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:04.136 [2024-11-20 10:42:07.506101] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:04.136 [2024-11-20 10:42:07.506113] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:04.136 [2024-11-20 10:42:07.506177] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:04.136 [2024-11-20 10:42:07.506239] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:04.136 [2024-11-20 10:42:07.506249] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:04.136 [2024-11-20 10:42:07.506310] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:04.136 10:42:07 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.136 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.136 "name": "raid_bdev1", 00:19:04.136 "uuid": "f149d403-d59e-4b24-b0d1-168b7746a129", 00:19:04.136 "strip_size_kb": 0, 00:19:04.136 "state": "online", 00:19:04.136 "raid_level": "raid1", 00:19:04.136 "superblock": true, 00:19:04.136 "num_base_bdevs": 2, 00:19:04.136 "num_base_bdevs_discovered": 2, 00:19:04.136 "num_base_bdevs_operational": 2, 00:19:04.137 "base_bdevs_list": [ 00:19:04.137 { 00:19:04.137 "name": "pt1", 00:19:04.137 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:04.137 "is_configured": true, 00:19:04.137 "data_offset": 256, 00:19:04.137 "data_size": 7936 00:19:04.137 }, 00:19:04.137 { 00:19:04.137 "name": "pt2", 00:19:04.137 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:19:04.137 "is_configured": true, 00:19:04.137 "data_offset": 256, 00:19:04.137 "data_size": 7936 00:19:04.137 } 00:19:04.137 ] 00:19:04.137 }' 00:19:04.137 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.137 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.706 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:04.706 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:04.706 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:04.706 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:04.706 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:04.706 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:04.706 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:04.706 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:04.706 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.706 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.706 [2024-11-20 10:42:07.927669] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:04.706 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.706 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 
00:19:04.706 "name": "raid_bdev1", 00:19:04.706 "aliases": [ 00:19:04.706 "f149d403-d59e-4b24-b0d1-168b7746a129" 00:19:04.706 ], 00:19:04.706 "product_name": "Raid Volume", 00:19:04.706 "block_size": 4128, 00:19:04.706 "num_blocks": 7936, 00:19:04.706 "uuid": "f149d403-d59e-4b24-b0d1-168b7746a129", 00:19:04.706 "md_size": 32, 00:19:04.706 "md_interleave": true, 00:19:04.706 "dif_type": 0, 00:19:04.706 "assigned_rate_limits": { 00:19:04.706 "rw_ios_per_sec": 0, 00:19:04.706 "rw_mbytes_per_sec": 0, 00:19:04.706 "r_mbytes_per_sec": 0, 00:19:04.706 "w_mbytes_per_sec": 0 00:19:04.706 }, 00:19:04.706 "claimed": false, 00:19:04.706 "zoned": false, 00:19:04.706 "supported_io_types": { 00:19:04.706 "read": true, 00:19:04.706 "write": true, 00:19:04.706 "unmap": false, 00:19:04.706 "flush": false, 00:19:04.706 "reset": true, 00:19:04.706 "nvme_admin": false, 00:19:04.706 "nvme_io": false, 00:19:04.706 "nvme_io_md": false, 00:19:04.706 "write_zeroes": true, 00:19:04.706 "zcopy": false, 00:19:04.706 "get_zone_info": false, 00:19:04.706 "zone_management": false, 00:19:04.706 "zone_append": false, 00:19:04.706 "compare": false, 00:19:04.706 "compare_and_write": false, 00:19:04.706 "abort": false, 00:19:04.706 "seek_hole": false, 00:19:04.706 "seek_data": false, 00:19:04.706 "copy": false, 00:19:04.706 "nvme_iov_md": false 00:19:04.706 }, 00:19:04.706 "memory_domains": [ 00:19:04.706 { 00:19:04.706 "dma_device_id": "system", 00:19:04.706 "dma_device_type": 1 00:19:04.706 }, 00:19:04.706 { 00:19:04.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.706 "dma_device_type": 2 00:19:04.706 }, 00:19:04.706 { 00:19:04.706 "dma_device_id": "system", 00:19:04.706 "dma_device_type": 1 00:19:04.706 }, 00:19:04.706 { 00:19:04.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.706 "dma_device_type": 2 00:19:04.706 } 00:19:04.706 ], 00:19:04.706 "driver_specific": { 00:19:04.706 "raid": { 00:19:04.707 "uuid": "f149d403-d59e-4b24-b0d1-168b7746a129", 00:19:04.707 "strip_size_kb": 0, 
00:19:04.707 "state": "online", 00:19:04.707 "raid_level": "raid1", 00:19:04.707 "superblock": true, 00:19:04.707 "num_base_bdevs": 2, 00:19:04.707 "num_base_bdevs_discovered": 2, 00:19:04.707 "num_base_bdevs_operational": 2, 00:19:04.707 "base_bdevs_list": [ 00:19:04.707 { 00:19:04.707 "name": "pt1", 00:19:04.707 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:04.707 "is_configured": true, 00:19:04.707 "data_offset": 256, 00:19:04.707 "data_size": 7936 00:19:04.707 }, 00:19:04.707 { 00:19:04.707 "name": "pt2", 00:19:04.707 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:04.707 "is_configured": true, 00:19:04.707 "data_offset": 256, 00:19:04.707 "data_size": 7936 00:19:04.707 } 00:19:04.707 ] 00:19:04.707 } 00:19:04.707 } 00:19:04.707 }' 00:19:04.707 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:04.707 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:04.707 pt2' 00:19:04.707 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:04.707 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:04.707 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:04.707 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:04.707 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.707 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.707 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:19:04.707 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.707 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:04.707 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:04.707 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:04.707 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:04.707 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:04.707 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.707 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.707 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.707 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:04.707 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:04.707 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:04.707 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.707 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.707 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:19:04.707 [2024-11-20 10:42:08.135218] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:04.707 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.707 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f149d403-d59e-4b24-b0d1-168b7746a129 00:19:04.707 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z f149d403-d59e-4b24-b0d1-168b7746a129 ']' 00:19:04.707 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:04.707 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.707 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.968 [2024-11-20 10:42:08.182882] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:04.968 [2024-11-20 10:42:08.182903] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:04.968 [2024-11-20 10:42:08.182978] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:04.968 [2024-11-20 10:42:08.183031] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:04.968 [2024-11-20 10:42:08.183042] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.968 10:42:08 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.968 [2024-11-20 10:42:08.318671] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:04.968 [2024-11-20 10:42:08.320509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:04.968 [2024-11-20 10:42:08.320626] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:04.968 [2024-11-20 10:42:08.320716] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:04.968 [2024-11-20 10:42:08.320763] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:04.968 [2024-11-20 10:42:08.320799] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:04.968 request: 00:19:04.968 { 00:19:04.968 "name": "raid_bdev1", 00:19:04.968 "raid_level": "raid1", 00:19:04.968 "base_bdevs": [ 00:19:04.968 "malloc1", 00:19:04.968 "malloc2" 00:19:04.968 ], 00:19:04.968 "superblock": false, 00:19:04.968 "method": "bdev_raid_create", 00:19:04.968 "req_id": 1 00:19:04.968 } 00:19:04.968 Got JSON-RPC error response 00:19:04.968 response: 00:19:04.968 { 00:19:04.968 "code": -17, 00:19:04.968 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:04.968 } 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.968 [2024-11-20 10:42:08.386531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:04.968 [2024-11-20 10:42:08.386631] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:04.968 [2024-11-20 10:42:08.386665] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:04.968 [2024-11-20 10:42:08.386693] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:04.968 [2024-11-20 10:42:08.388489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:04.968 [2024-11-20 10:42:08.388560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:04.968 [2024-11-20 10:42:08.388623] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt1 00:19:04.968 [2024-11-20 10:42:08.388695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:04.968 pt1 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.968 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.968 "name": "raid_bdev1", 00:19:04.968 "uuid": "f149d403-d59e-4b24-b0d1-168b7746a129", 00:19:04.968 "strip_size_kb": 0, 00:19:04.968 "state": "configuring", 00:19:04.968 "raid_level": "raid1", 00:19:04.968 "superblock": true, 00:19:04.968 "num_base_bdevs": 2, 00:19:04.968 "num_base_bdevs_discovered": 1, 00:19:04.968 "num_base_bdevs_operational": 2, 00:19:04.968 "base_bdevs_list": [ 00:19:04.968 { 00:19:04.968 "name": "pt1", 00:19:04.968 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:04.968 "is_configured": true, 00:19:04.968 "data_offset": 256, 00:19:04.968 "data_size": 7936 00:19:04.968 }, 00:19:04.968 { 00:19:04.968 "name": null, 00:19:04.968 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:04.968 "is_configured": false, 00:19:04.969 "data_offset": 256, 00:19:04.969 "data_size": 7936 00:19:04.969 } 00:19:04.969 ] 00:19:04.969 }' 00:19:04.969 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.969 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.560 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:05.560 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:05.560 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:05.560 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:05.560 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:19:05.560 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.560 [2024-11-20 10:42:08.857763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:05.560 [2024-11-20 10:42:08.857877] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.560 [2024-11-20 10:42:08.857904] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:05.560 [2024-11-20 10:42:08.857914] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.560 [2024-11-20 10:42:08.858076] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.560 [2024-11-20 10:42:08.858090] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:05.560 [2024-11-20 10:42:08.858136] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:05.560 [2024-11-20 10:42:08.858160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:05.560 [2024-11-20 10:42:08.858245] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:05.560 [2024-11-20 10:42:08.858256] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:05.560 [2024-11-20 10:42:08.858326] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:05.560 [2024-11-20 10:42:08.858420] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:05.560 [2024-11-20 10:42:08.858435] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:05.560 [2024-11-20 10:42:08.858497] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:05.560 pt2 00:19:05.560 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:19:05.560 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:05.560 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:05.560 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:05.560 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:05.560 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:05.560 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:05.560 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:05.560 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:05.560 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.560 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.560 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.560 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.560 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.560 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.560 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.560 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set 
+x 00:19:05.560 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.560 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.560 "name": "raid_bdev1", 00:19:05.560 "uuid": "f149d403-d59e-4b24-b0d1-168b7746a129", 00:19:05.560 "strip_size_kb": 0, 00:19:05.560 "state": "online", 00:19:05.560 "raid_level": "raid1", 00:19:05.560 "superblock": true, 00:19:05.560 "num_base_bdevs": 2, 00:19:05.560 "num_base_bdevs_discovered": 2, 00:19:05.560 "num_base_bdevs_operational": 2, 00:19:05.560 "base_bdevs_list": [ 00:19:05.560 { 00:19:05.560 "name": "pt1", 00:19:05.560 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:05.560 "is_configured": true, 00:19:05.560 "data_offset": 256, 00:19:05.560 "data_size": 7936 00:19:05.560 }, 00:19:05.560 { 00:19:05.560 "name": "pt2", 00:19:05.560 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:05.560 "is_configured": true, 00:19:05.560 "data_offset": 256, 00:19:05.560 "data_size": 7936 00:19:05.560 } 00:19:05.560 ] 00:19:05.560 }' 00:19:05.560 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.560 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.822 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:05.822 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:05.822 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:05.822 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:05.822 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:05.822 10:42:09 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:05.822 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:05.822 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.822 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.822 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:05.822 [2024-11-20 10:42:09.257344] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:05.822 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.822 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:05.822 "name": "raid_bdev1", 00:19:05.822 "aliases": [ 00:19:05.822 "f149d403-d59e-4b24-b0d1-168b7746a129" 00:19:05.822 ], 00:19:05.822 "product_name": "Raid Volume", 00:19:05.822 "block_size": 4128, 00:19:05.822 "num_blocks": 7936, 00:19:05.822 "uuid": "f149d403-d59e-4b24-b0d1-168b7746a129", 00:19:05.822 "md_size": 32, 00:19:05.822 "md_interleave": true, 00:19:05.822 "dif_type": 0, 00:19:05.822 "assigned_rate_limits": { 00:19:05.822 "rw_ios_per_sec": 0, 00:19:05.822 "rw_mbytes_per_sec": 0, 00:19:05.822 "r_mbytes_per_sec": 0, 00:19:05.822 "w_mbytes_per_sec": 0 00:19:05.822 }, 00:19:05.822 "claimed": false, 00:19:05.822 "zoned": false, 00:19:05.822 "supported_io_types": { 00:19:05.822 "read": true, 00:19:05.822 "write": true, 00:19:05.822 "unmap": false, 00:19:05.822 "flush": false, 00:19:05.822 "reset": true, 00:19:05.822 "nvme_admin": false, 00:19:05.822 "nvme_io": false, 00:19:05.822 "nvme_io_md": false, 00:19:05.822 "write_zeroes": true, 00:19:05.822 "zcopy": false, 00:19:05.822 "get_zone_info": false, 00:19:05.822 "zone_management": 
false, 00:19:05.822 "zone_append": false, 00:19:05.822 "compare": false, 00:19:05.822 "compare_and_write": false, 00:19:05.822 "abort": false, 00:19:05.822 "seek_hole": false, 00:19:05.822 "seek_data": false, 00:19:05.822 "copy": false, 00:19:05.822 "nvme_iov_md": false 00:19:05.822 }, 00:19:05.822 "memory_domains": [ 00:19:05.822 { 00:19:05.822 "dma_device_id": "system", 00:19:05.822 "dma_device_type": 1 00:19:05.822 }, 00:19:05.822 { 00:19:05.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.822 "dma_device_type": 2 00:19:05.822 }, 00:19:05.822 { 00:19:05.822 "dma_device_id": "system", 00:19:05.822 "dma_device_type": 1 00:19:05.822 }, 00:19:05.822 { 00:19:05.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.822 "dma_device_type": 2 00:19:05.822 } 00:19:05.822 ], 00:19:05.822 "driver_specific": { 00:19:05.822 "raid": { 00:19:05.822 "uuid": "f149d403-d59e-4b24-b0d1-168b7746a129", 00:19:05.822 "strip_size_kb": 0, 00:19:05.822 "state": "online", 00:19:05.822 "raid_level": "raid1", 00:19:05.822 "superblock": true, 00:19:05.822 "num_base_bdevs": 2, 00:19:05.822 "num_base_bdevs_discovered": 2, 00:19:05.822 "num_base_bdevs_operational": 2, 00:19:05.822 "base_bdevs_list": [ 00:19:05.822 { 00:19:05.822 "name": "pt1", 00:19:05.822 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:05.822 "is_configured": true, 00:19:05.822 "data_offset": 256, 00:19:05.822 "data_size": 7936 00:19:05.822 }, 00:19:05.822 { 00:19:05.822 "name": "pt2", 00:19:05.822 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:05.822 "is_configured": true, 00:19:05.822 "data_offset": 256, 00:19:05.822 "data_size": 7936 00:19:05.822 } 00:19:05.822 ] 00:19:05.822 } 00:19:05.822 } 00:19:05.822 }' 00:19:05.822 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 
00:19:06.082 pt2' 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:06.082 [2024-11-20 10:42:09.472940] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' f149d403-d59e-4b24-b0d1-168b7746a129 '!=' f149d403-d59e-4b24-b0d1-168b7746a129 ']' 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.082 10:42:09 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.082 [2024-11-20 10:42:09.512664] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.082 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.341 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.341 "name": "raid_bdev1", 00:19:06.341 "uuid": "f149d403-d59e-4b24-b0d1-168b7746a129", 00:19:06.341 "strip_size_kb": 0, 00:19:06.341 "state": "online", 00:19:06.341 "raid_level": "raid1", 00:19:06.341 "superblock": true, 00:19:06.341 "num_base_bdevs": 2, 00:19:06.341 "num_base_bdevs_discovered": 1, 00:19:06.341 "num_base_bdevs_operational": 1, 00:19:06.341 "base_bdevs_list": [ 00:19:06.341 { 00:19:06.341 "name": null, 00:19:06.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.341 "is_configured": false, 00:19:06.341 "data_offset": 0, 00:19:06.341 "data_size": 7936 00:19:06.341 }, 00:19:06.341 { 00:19:06.341 "name": "pt2", 00:19:06.341 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:06.341 "is_configured": true, 00:19:06.341 "data_offset": 256, 00:19:06.341 "data_size": 7936 00:19:06.341 } 00:19:06.341 ] 00:19:06.341 }' 00:19:06.341 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.341 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.600 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:06.600 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.600 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.600 [2024-11-20 10:42:09.967878] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:06.600 [2024-11-20 10:42:09.967945] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online 
to offline 00:19:06.600 [2024-11-20 10:42:09.968037] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:06.600 [2024-11-20 10:42:09.968098] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:06.600 [2024-11-20 10:42:09.968171] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:06.600 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.600 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.600 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:06.600 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.600 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.600 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.600 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:06.600 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:06.600 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:06.600 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:06.600 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:06.600 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.600 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.600 10:42:10 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.600 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:06.600 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:06.600 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:06.600 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:06.600 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:19:06.600 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:06.600 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.600 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.600 [2024-11-20 10:42:10.039772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:06.600 [2024-11-20 10:42:10.039823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:06.600 [2024-11-20 10:42:10.039840] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:06.600 [2024-11-20 10:42:10.039850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:06.600 [2024-11-20 10:42:10.041657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:06.601 [2024-11-20 10:42:10.041694] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:06.601 [2024-11-20 10:42:10.041743] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:06.601 [2024-11-20 10:42:10.041794] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:06.601 [2024-11-20 10:42:10.041857] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:06.601 [2024-11-20 10:42:10.041868] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:06.601 [2024-11-20 10:42:10.041952] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:06.601 [2024-11-20 10:42:10.042016] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:06.601 [2024-11-20 10:42:10.042023] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:06.601 [2024-11-20 10:42:10.042081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:06.601 pt2 00:19:06.601 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.601 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:06.601 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:06.601 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:06.601 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:06.601 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:06.601 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:06.601 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.601 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:19:06.601 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.601 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.601 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.601 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.601 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.601 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.601 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.861 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.861 "name": "raid_bdev1", 00:19:06.861 "uuid": "f149d403-d59e-4b24-b0d1-168b7746a129", 00:19:06.861 "strip_size_kb": 0, 00:19:06.861 "state": "online", 00:19:06.861 "raid_level": "raid1", 00:19:06.861 "superblock": true, 00:19:06.861 "num_base_bdevs": 2, 00:19:06.861 "num_base_bdevs_discovered": 1, 00:19:06.861 "num_base_bdevs_operational": 1, 00:19:06.861 "base_bdevs_list": [ 00:19:06.861 { 00:19:06.861 "name": null, 00:19:06.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.861 "is_configured": false, 00:19:06.861 "data_offset": 256, 00:19:06.861 "data_size": 7936 00:19:06.861 }, 00:19:06.861 { 00:19:06.861 "name": "pt2", 00:19:06.861 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:06.861 "is_configured": true, 00:19:06.861 "data_offset": 256, 00:19:06.861 "data_size": 7936 00:19:06.861 } 00:19:06.861 ] 00:19:06.861 }' 00:19:06.861 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.861 10:42:10 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.121 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:07.121 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.121 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.121 [2024-11-20 10:42:10.455021] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:07.121 [2024-11-20 10:42:10.455098] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:07.121 [2024-11-20 10:42:10.455191] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:07.121 [2024-11-20 10:42:10.455255] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:07.121 [2024-11-20 10:42:10.455314] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:07.121 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.121 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:07.121 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.121 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.121 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.121 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.121 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:07.121 10:42:10 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:07.121 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:07.121 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:07.121 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.121 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.121 [2024-11-20 10:42:10.494974] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:07.121 [2024-11-20 10:42:10.495083] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:07.121 [2024-11-20 10:42:10.495123] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:07.121 [2024-11-20 10:42:10.495152] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.121 [2024-11-20 10:42:10.496969] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.121 [2024-11-20 10:42:10.497039] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:07.121 [2024-11-20 10:42:10.497110] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:07.121 [2024-11-20 10:42:10.497170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:07.121 [2024-11-20 10:42:10.497277] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:07.121 [2024-11-20 10:42:10.497327] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:07.121 [2024-11-20 10:42:10.497382] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:19:07.121 [2024-11-20 10:42:10.497485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:07.121 [2024-11-20 10:42:10.497578] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:07.121 [2024-11-20 10:42:10.497613] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:07.121 [2024-11-20 10:42:10.497690] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:07.121 [2024-11-20 10:42:10.497780] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:07.121 [2024-11-20 10:42:10.497818] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:07.121 [2024-11-20 10:42:10.497918] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:07.121 pt1 00:19:07.121 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.121 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:19:07.121 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:07.121 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:07.121 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:07.121 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:07.121 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:07.121 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:07.121 10:42:10 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.121 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.121 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.121 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.121 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.121 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.121 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.121 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.121 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.121 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.121 "name": "raid_bdev1", 00:19:07.121 "uuid": "f149d403-d59e-4b24-b0d1-168b7746a129", 00:19:07.121 "strip_size_kb": 0, 00:19:07.121 "state": "online", 00:19:07.121 "raid_level": "raid1", 00:19:07.121 "superblock": true, 00:19:07.121 "num_base_bdevs": 2, 00:19:07.121 "num_base_bdevs_discovered": 1, 00:19:07.121 "num_base_bdevs_operational": 1, 00:19:07.121 "base_bdevs_list": [ 00:19:07.121 { 00:19:07.121 "name": null, 00:19:07.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.121 "is_configured": false, 00:19:07.121 "data_offset": 256, 00:19:07.121 "data_size": 7936 00:19:07.121 }, 00:19:07.121 { 00:19:07.121 "name": "pt2", 00:19:07.121 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:07.121 "is_configured": true, 00:19:07.121 "data_offset": 256, 00:19:07.121 
"data_size": 7936 00:19:07.121 } 00:19:07.121 ] 00:19:07.121 }' 00:19:07.121 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.121 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.692 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:07.692 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:07.692 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.692 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.692 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.692 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:07.692 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:07.692 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.692 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:07.692 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.692 [2024-11-20 10:42:10.894529] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:07.692 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.692 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' f149d403-d59e-4b24-b0d1-168b7746a129 '!=' f149d403-d59e-4b24-b0d1-168b7746a129 ']' 00:19:07.692 10:42:10 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88841 00:19:07.692 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88841 ']' 00:19:07.692 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88841 00:19:07.692 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:19:07.692 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:07.692 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88841 00:19:07.692 killing process with pid 88841 00:19:07.692 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:07.692 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:07.692 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88841' 00:19:07.692 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 88841 00:19:07.692 [2024-11-20 10:42:10.975015] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:07.692 [2024-11-20 10:42:10.975093] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:07.692 [2024-11-20 10:42:10.975134] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:07.692 [2024-11-20 10:42:10.975149] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:07.692 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 88841 00:19:07.952 [2024-11-20 10:42:11.172755] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:08.908 ************************************ 00:19:08.908 END TEST raid_superblock_test_md_interleaved 00:19:08.908 ************************************ 00:19:08.908 10:42:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:19:08.908 00:19:08.908 real 0m5.709s 00:19:08.908 user 0m8.669s 00:19:08.908 sys 0m0.974s 00:19:08.908 10:42:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:08.908 10:42:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.908 10:42:12 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:19:08.908 10:42:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:08.908 10:42:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:08.908 10:42:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:08.908 ************************************ 00:19:08.908 START TEST raid_rebuild_test_sb_md_interleaved 00:19:08.908 ************************************ 00:19:08.908 10:42:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:19:08.908 10:42:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:08.908 10:42:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:08.908 10:42:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:08.908 10:42:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:08.908 10:42:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:19:08.908 10:42:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:08.908 10:42:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:08.908 10:42:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:08.908 10:42:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:08.908 10:42:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:08.908 10:42:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:08.908 10:42:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:08.908 10:42:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:08.908 10:42:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:08.908 10:42:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:08.908 10:42:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:08.908 10:42:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:08.908 10:42:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:08.908 10:42:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:08.908 10:42:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:08.908 10:42:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:08.908 10:42:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:08.908 
10:42:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:08.908 10:42:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:08.908 10:42:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89167 00:19:08.908 10:42:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:08.908 10:42:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89167 00:19:08.908 10:42:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89167 ']' 00:19:08.908 10:42:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.908 10:42:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:08.908 10:42:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.908 10:42:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:08.908 10:42:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.908 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:08.908 Zero copy mechanism will not be used. 00:19:08.908 [2024-11-20 10:42:12.353593] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:19:08.908 [2024-11-20 10:42:12.353718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89167 ] 00:19:09.167 [2024-11-20 10:42:12.524166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.167 [2024-11-20 10:42:12.632246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.427 [2024-11-20 10:42:12.805193] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:09.427 [2024-11-20 10:42:12.805326] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:09.686 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:09.686 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:19:09.686 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:09.686 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:19:09.686 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.686 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.946 BaseBdev1_malloc 00:19:09.946 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.946 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:09.946 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.946 10:42:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.946 [2024-11-20 10:42:13.206753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:09.946 [2024-11-20 10:42:13.206813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:09.946 [2024-11-20 10:42:13.206849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:09.946 [2024-11-20 10:42:13.206859] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:09.946 [2024-11-20 10:42:13.208635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:09.946 [2024-11-20 10:42:13.208737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:09.946 BaseBdev1 00:19:09.946 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.946 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:09.946 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:19:09.946 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.946 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.946 BaseBdev2_malloc 00:19:09.946 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.946 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:09.946 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.946 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:19:09.946 [2024-11-20 10:42:13.260112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:09.947 [2024-11-20 10:42:13.260233] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:09.947 [2024-11-20 10:42:13.260271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:09.947 [2024-11-20 10:42:13.260303] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:09.947 [2024-11-20 10:42:13.262042] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:09.947 [2024-11-20 10:42:13.262081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:09.947 BaseBdev2 00:19:09.947 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.947 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:19:09.947 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.947 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.947 spare_malloc 00:19:09.947 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.947 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:09.947 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.947 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.947 spare_delay 00:19:09.947 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.947 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:09.947 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.947 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.947 [2024-11-20 10:42:13.362150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:09.947 [2024-11-20 10:42:13.362264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:09.947 [2024-11-20 10:42:13.362304] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:09.947 [2024-11-20 10:42:13.362339] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:09.947 [2024-11-20 10:42:13.364201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:09.947 [2024-11-20 10:42:13.364279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:09.947 spare 00:19:09.947 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.947 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:09.947 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.947 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.947 [2024-11-20 10:42:13.374164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:09.947 [2024-11-20 10:42:13.375913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:09.947 [2024-11-20 
10:42:13.376103] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:09.947 [2024-11-20 10:42:13.376117] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:09.947 [2024-11-20 10:42:13.376190] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:09.947 [2024-11-20 10:42:13.376259] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:09.947 [2024-11-20 10:42:13.376266] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:09.947 [2024-11-20 10:42:13.376330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:09.947 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.947 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:09.947 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:09.947 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:09.947 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:09.947 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:09.947 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:09.947 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.947 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.947 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:19:09.947 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.947 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.947 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.947 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.947 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.947 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.207 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:10.207 "name": "raid_bdev1", 00:19:10.207 "uuid": "a611751f-fc79-408c-a516-2a103933e8e4", 00:19:10.207 "strip_size_kb": 0, 00:19:10.207 "state": "online", 00:19:10.207 "raid_level": "raid1", 00:19:10.207 "superblock": true, 00:19:10.207 "num_base_bdevs": 2, 00:19:10.207 "num_base_bdevs_discovered": 2, 00:19:10.207 "num_base_bdevs_operational": 2, 00:19:10.207 "base_bdevs_list": [ 00:19:10.207 { 00:19:10.207 "name": "BaseBdev1", 00:19:10.207 "uuid": "01561d09-da31-53e5-985a-ce36a4cd4d9c", 00:19:10.207 "is_configured": true, 00:19:10.207 "data_offset": 256, 00:19:10.207 "data_size": 7936 00:19:10.207 }, 00:19:10.207 { 00:19:10.207 "name": "BaseBdev2", 00:19:10.207 "uuid": "dcdd83ee-965a-5303-af09-8633f0ec1007", 00:19:10.207 "is_configured": true, 00:19:10.207 "data_offset": 256, 00:19:10.207 "data_size": 7936 00:19:10.207 } 00:19:10.207 ] 00:19:10.207 }' 00:19:10.207 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:10.207 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.467 10:42:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:10.467 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:10.467 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.467 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.467 [2024-11-20 10:42:13.833669] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:10.467 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.467 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:10.467 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:10.467 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.467 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.467 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.467 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.467 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:10.467 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:10.467 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:19:10.467 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:10.468 10:42:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.468 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.468 [2024-11-20 10:42:13.889276] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:10.468 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.468 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:10.468 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:10.468 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:10.468 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:10.468 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:10.468 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:10.468 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:10.468 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:10.468 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:10.468 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:10.468 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.468 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.468 10:42:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.468 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.468 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.727 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:10.727 "name": "raid_bdev1", 00:19:10.727 "uuid": "a611751f-fc79-408c-a516-2a103933e8e4", 00:19:10.727 "strip_size_kb": 0, 00:19:10.727 "state": "online", 00:19:10.727 "raid_level": "raid1", 00:19:10.727 "superblock": true, 00:19:10.727 "num_base_bdevs": 2, 00:19:10.727 "num_base_bdevs_discovered": 1, 00:19:10.727 "num_base_bdevs_operational": 1, 00:19:10.727 "base_bdevs_list": [ 00:19:10.727 { 00:19:10.727 "name": null, 00:19:10.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.727 "is_configured": false, 00:19:10.727 "data_offset": 0, 00:19:10.727 "data_size": 7936 00:19:10.727 }, 00:19:10.727 { 00:19:10.727 "name": "BaseBdev2", 00:19:10.727 "uuid": "dcdd83ee-965a-5303-af09-8633f0ec1007", 00:19:10.727 "is_configured": true, 00:19:10.727 "data_offset": 256, 00:19:10.727 "data_size": 7936 00:19:10.728 } 00:19:10.728 ] 00:19:10.728 }' 00:19:10.728 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:10.728 10:42:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.988 10:42:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:10.988 10:42:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.988 10:42:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.988 [2024-11-20 10:42:14.272623] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:10.988 [2024-11-20 10:42:14.288110] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:10.988 10:42:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.988 10:42:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:10.988 [2024-11-20 10:42:14.289913] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:11.927 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:11.927 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:11.927 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:11.927 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:11.927 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:11.927 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.927 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.927 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.927 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.927 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.927 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:11.927 "name": "raid_bdev1", 00:19:11.927 
"uuid": "a611751f-fc79-408c-a516-2a103933e8e4", 00:19:11.927 "strip_size_kb": 0, 00:19:11.927 "state": "online", 00:19:11.927 "raid_level": "raid1", 00:19:11.927 "superblock": true, 00:19:11.927 "num_base_bdevs": 2, 00:19:11.927 "num_base_bdevs_discovered": 2, 00:19:11.927 "num_base_bdevs_operational": 2, 00:19:11.927 "process": { 00:19:11.927 "type": "rebuild", 00:19:11.927 "target": "spare", 00:19:11.927 "progress": { 00:19:11.927 "blocks": 2560, 00:19:11.927 "percent": 32 00:19:11.927 } 00:19:11.927 }, 00:19:11.927 "base_bdevs_list": [ 00:19:11.927 { 00:19:11.927 "name": "spare", 00:19:11.927 "uuid": "0b61ebef-e9fb-5114-ae11-dbe8524bc2f0", 00:19:11.927 "is_configured": true, 00:19:11.927 "data_offset": 256, 00:19:11.927 "data_size": 7936 00:19:11.927 }, 00:19:11.927 { 00:19:11.927 "name": "BaseBdev2", 00:19:11.927 "uuid": "dcdd83ee-965a-5303-af09-8633f0ec1007", 00:19:11.927 "is_configured": true, 00:19:11.927 "data_offset": 256, 00:19:11.927 "data_size": 7936 00:19:11.927 } 00:19:11.927 ] 00:19:11.927 }' 00:19:11.927 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:11.927 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:11.927 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:12.187 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:12.187 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:12.187 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.187 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.187 [2024-11-20 10:42:15.453681] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:19:12.187 [2024-11-20 10:42:15.494639] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:12.187 [2024-11-20 10:42:15.494754] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:12.187 [2024-11-20 10:42:15.494770] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:12.187 [2024-11-20 10:42:15.494782] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:12.187 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.187 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:12.187 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:12.187 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:12.187 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:12.187 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:12.187 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:12.187 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.187 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.187 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.187 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.187 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.187 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.187 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.187 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.187 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.188 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.188 "name": "raid_bdev1", 00:19:12.188 "uuid": "a611751f-fc79-408c-a516-2a103933e8e4", 00:19:12.188 "strip_size_kb": 0, 00:19:12.188 "state": "online", 00:19:12.188 "raid_level": "raid1", 00:19:12.188 "superblock": true, 00:19:12.188 "num_base_bdevs": 2, 00:19:12.188 "num_base_bdevs_discovered": 1, 00:19:12.188 "num_base_bdevs_operational": 1, 00:19:12.188 "base_bdevs_list": [ 00:19:12.188 { 00:19:12.188 "name": null, 00:19:12.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.188 "is_configured": false, 00:19:12.188 "data_offset": 0, 00:19:12.188 "data_size": 7936 00:19:12.188 }, 00:19:12.188 { 00:19:12.188 "name": "BaseBdev2", 00:19:12.188 "uuid": "dcdd83ee-965a-5303-af09-8633f0ec1007", 00:19:12.188 "is_configured": true, 00:19:12.188 "data_offset": 256, 00:19:12.188 "data_size": 7936 00:19:12.188 } 00:19:12.188 ] 00:19:12.188 }' 00:19:12.188 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.188 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.757 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:12.757 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:19:12.757 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:12.757 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:12.757 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:12.757 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.757 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.757 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.757 10:42:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.757 10:42:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.757 10:42:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:12.757 "name": "raid_bdev1", 00:19:12.757 "uuid": "a611751f-fc79-408c-a516-2a103933e8e4", 00:19:12.757 "strip_size_kb": 0, 00:19:12.757 "state": "online", 00:19:12.757 "raid_level": "raid1", 00:19:12.757 "superblock": true, 00:19:12.757 "num_base_bdevs": 2, 00:19:12.757 "num_base_bdevs_discovered": 1, 00:19:12.757 "num_base_bdevs_operational": 1, 00:19:12.757 "base_bdevs_list": [ 00:19:12.757 { 00:19:12.757 "name": null, 00:19:12.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.757 "is_configured": false, 00:19:12.757 "data_offset": 0, 00:19:12.757 "data_size": 7936 00:19:12.757 }, 00:19:12.757 { 00:19:12.757 "name": "BaseBdev2", 00:19:12.757 "uuid": "dcdd83ee-965a-5303-af09-8633f0ec1007", 00:19:12.757 "is_configured": true, 00:19:12.757 "data_offset": 256, 00:19:12.757 "data_size": 7936 00:19:12.757 } 00:19:12.757 ] 00:19:12.757 }' 
00:19:12.757 10:42:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:12.757 10:42:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:12.757 10:42:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:12.757 10:42:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:12.757 10:42:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:12.757 10:42:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.757 10:42:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.757 [2024-11-20 10:42:16.130006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:12.757 [2024-11-20 10:42:16.145403] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:12.757 10:42:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.757 10:42:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:12.757 [2024-11-20 10:42:16.147184] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:13.695 10:42:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:13.695 10:42:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:13.695 10:42:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:13.695 10:42:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:19:13.695 10:42:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:13.695 10:42:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.695 10:42:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.695 10:42:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.695 10:42:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.954 10:42:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.954 10:42:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:13.954 "name": "raid_bdev1", 00:19:13.954 "uuid": "a611751f-fc79-408c-a516-2a103933e8e4", 00:19:13.954 "strip_size_kb": 0, 00:19:13.954 "state": "online", 00:19:13.954 "raid_level": "raid1", 00:19:13.954 "superblock": true, 00:19:13.954 "num_base_bdevs": 2, 00:19:13.954 "num_base_bdevs_discovered": 2, 00:19:13.954 "num_base_bdevs_operational": 2, 00:19:13.954 "process": { 00:19:13.954 "type": "rebuild", 00:19:13.954 "target": "spare", 00:19:13.954 "progress": { 00:19:13.954 "blocks": 2560, 00:19:13.954 "percent": 32 00:19:13.954 } 00:19:13.954 }, 00:19:13.954 "base_bdevs_list": [ 00:19:13.954 { 00:19:13.955 "name": "spare", 00:19:13.955 "uuid": "0b61ebef-e9fb-5114-ae11-dbe8524bc2f0", 00:19:13.955 "is_configured": true, 00:19:13.955 "data_offset": 256, 00:19:13.955 "data_size": 7936 00:19:13.955 }, 00:19:13.955 { 00:19:13.955 "name": "BaseBdev2", 00:19:13.955 "uuid": "dcdd83ee-965a-5303-af09-8633f0ec1007", 00:19:13.955 "is_configured": true, 00:19:13.955 "data_offset": 256, 00:19:13.955 "data_size": 7936 00:19:13.955 } 00:19:13.955 ] 00:19:13.955 }' 00:19:13.955 10:42:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:13.955 10:42:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:13.955 10:42:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:13.955 10:42:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:13.955 10:42:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:13.955 10:42:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:13.955 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:13.955 10:42:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:13.955 10:42:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:13.955 10:42:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:13.955 10:42:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=742 00:19:13.955 10:42:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:13.955 10:42:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:13.955 10:42:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:13.955 10:42:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:13.955 10:42:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:13.955 10:42:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:13.955 10:42:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.955 10:42:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.955 10:42:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.955 10:42:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.955 10:42:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.955 10:42:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:13.955 "name": "raid_bdev1", 00:19:13.955 "uuid": "a611751f-fc79-408c-a516-2a103933e8e4", 00:19:13.955 "strip_size_kb": 0, 00:19:13.955 "state": "online", 00:19:13.955 "raid_level": "raid1", 00:19:13.955 "superblock": true, 00:19:13.955 "num_base_bdevs": 2, 00:19:13.955 "num_base_bdevs_discovered": 2, 00:19:13.955 "num_base_bdevs_operational": 2, 00:19:13.955 "process": { 00:19:13.955 "type": "rebuild", 00:19:13.955 "target": "spare", 00:19:13.955 "progress": { 00:19:13.955 "blocks": 2816, 00:19:13.955 "percent": 35 00:19:13.955 } 00:19:13.955 }, 00:19:13.955 "base_bdevs_list": [ 00:19:13.955 { 00:19:13.955 "name": "spare", 00:19:13.955 "uuid": "0b61ebef-e9fb-5114-ae11-dbe8524bc2f0", 00:19:13.955 "is_configured": true, 00:19:13.955 "data_offset": 256, 00:19:13.955 "data_size": 7936 00:19:13.955 }, 00:19:13.955 { 00:19:13.955 "name": "BaseBdev2", 00:19:13.955 "uuid": "dcdd83ee-965a-5303-af09-8633f0ec1007", 00:19:13.955 "is_configured": true, 00:19:13.955 "data_offset": 256, 00:19:13.955 "data_size": 7936 00:19:13.955 } 00:19:13.955 ] 00:19:13.955 }' 00:19:13.955 10:42:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:13.955 10:42:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:13.955 10:42:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:13.955 10:42:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:13.955 10:42:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:15.342 10:42:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:15.343 10:42:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:15.343 10:42:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:15.343 10:42:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:15.343 10:42:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:15.343 10:42:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:15.343 10:42:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.343 10:42:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.343 10:42:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.343 10:42:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.343 10:42:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.343 10:42:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:15.343 "name": "raid_bdev1", 00:19:15.343 "uuid": "a611751f-fc79-408c-a516-2a103933e8e4", 00:19:15.343 "strip_size_kb": 0, 00:19:15.343 "state": "online", 00:19:15.343 "raid_level": "raid1", 00:19:15.343 "superblock": true, 00:19:15.343 "num_base_bdevs": 2, 00:19:15.343 "num_base_bdevs_discovered": 2, 00:19:15.343 "num_base_bdevs_operational": 2, 00:19:15.343 "process": { 00:19:15.343 "type": "rebuild", 00:19:15.343 "target": "spare", 00:19:15.343 "progress": { 00:19:15.343 "blocks": 5632, 00:19:15.343 "percent": 70 00:19:15.343 } 00:19:15.343 }, 00:19:15.343 "base_bdevs_list": [ 00:19:15.343 { 00:19:15.343 "name": "spare", 00:19:15.343 "uuid": "0b61ebef-e9fb-5114-ae11-dbe8524bc2f0", 00:19:15.343 "is_configured": true, 00:19:15.343 "data_offset": 256, 00:19:15.343 "data_size": 7936 00:19:15.343 }, 00:19:15.343 { 00:19:15.343 "name": "BaseBdev2", 00:19:15.343 "uuid": "dcdd83ee-965a-5303-af09-8633f0ec1007", 00:19:15.343 "is_configured": true, 00:19:15.343 "data_offset": 256, 00:19:15.343 "data_size": 7936 00:19:15.343 } 00:19:15.343 ] 00:19:15.343 }' 00:19:15.343 10:42:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.343 10:42:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:15.343 10:42:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:15.343 10:42:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:15.343 10:42:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:15.912 [2024-11-20 10:42:19.258871] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:15.912 [2024-11-20 10:42:19.258932] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:15.912 [2024-11-20 10:42:19.259027] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:16.171 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:16.171 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:16.171 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:16.171 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:16.171 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:16.171 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:16.171 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.171 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.171 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.171 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.171 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.171 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:16.171 "name": "raid_bdev1", 00:19:16.171 "uuid": "a611751f-fc79-408c-a516-2a103933e8e4", 00:19:16.171 "strip_size_kb": 0, 00:19:16.171 "state": "online", 00:19:16.171 "raid_level": "raid1", 00:19:16.171 "superblock": true, 00:19:16.171 "num_base_bdevs": 2, 00:19:16.171 
"num_base_bdevs_discovered": 2, 00:19:16.171 "num_base_bdevs_operational": 2, 00:19:16.171 "base_bdevs_list": [ 00:19:16.171 { 00:19:16.171 "name": "spare", 00:19:16.171 "uuid": "0b61ebef-e9fb-5114-ae11-dbe8524bc2f0", 00:19:16.171 "is_configured": true, 00:19:16.171 "data_offset": 256, 00:19:16.171 "data_size": 7936 00:19:16.171 }, 00:19:16.171 { 00:19:16.171 "name": "BaseBdev2", 00:19:16.171 "uuid": "dcdd83ee-965a-5303-af09-8633f0ec1007", 00:19:16.171 "is_configured": true, 00:19:16.171 "data_offset": 256, 00:19:16.171 "data_size": 7936 00:19:16.171 } 00:19:16.171 ] 00:19:16.171 }' 00:19:16.171 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:16.171 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:16.171 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:16.431 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:16.431 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:19:16.431 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:16.431 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:16.431 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:16.431 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:16.431 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:16.431 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.431 10:42:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.431 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.431 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.431 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.431 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:16.431 "name": "raid_bdev1", 00:19:16.431 "uuid": "a611751f-fc79-408c-a516-2a103933e8e4", 00:19:16.431 "strip_size_kb": 0, 00:19:16.431 "state": "online", 00:19:16.431 "raid_level": "raid1", 00:19:16.431 "superblock": true, 00:19:16.431 "num_base_bdevs": 2, 00:19:16.431 "num_base_bdevs_discovered": 2, 00:19:16.431 "num_base_bdevs_operational": 2, 00:19:16.431 "base_bdevs_list": [ 00:19:16.431 { 00:19:16.431 "name": "spare", 00:19:16.431 "uuid": "0b61ebef-e9fb-5114-ae11-dbe8524bc2f0", 00:19:16.431 "is_configured": true, 00:19:16.431 "data_offset": 256, 00:19:16.431 "data_size": 7936 00:19:16.431 }, 00:19:16.431 { 00:19:16.431 "name": "BaseBdev2", 00:19:16.431 "uuid": "dcdd83ee-965a-5303-af09-8633f0ec1007", 00:19:16.431 "is_configured": true, 00:19:16.431 "data_offset": 256, 00:19:16.431 "data_size": 7936 00:19:16.431 } 00:19:16.431 ] 00:19:16.431 }' 00:19:16.431 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:16.431 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:16.431 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:16.431 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:16.432 10:42:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:16.432 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:16.432 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:16.432 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:16.432 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:16.432 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:16.432 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:16.432 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:16.432 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:16.432 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:16.432 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.432 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.432 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.432 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.432 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.432 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:16.432 "name": 
"raid_bdev1", 00:19:16.432 "uuid": "a611751f-fc79-408c-a516-2a103933e8e4", 00:19:16.432 "strip_size_kb": 0, 00:19:16.432 "state": "online", 00:19:16.432 "raid_level": "raid1", 00:19:16.432 "superblock": true, 00:19:16.432 "num_base_bdevs": 2, 00:19:16.432 "num_base_bdevs_discovered": 2, 00:19:16.432 "num_base_bdevs_operational": 2, 00:19:16.432 "base_bdevs_list": [ 00:19:16.432 { 00:19:16.432 "name": "spare", 00:19:16.432 "uuid": "0b61ebef-e9fb-5114-ae11-dbe8524bc2f0", 00:19:16.432 "is_configured": true, 00:19:16.432 "data_offset": 256, 00:19:16.432 "data_size": 7936 00:19:16.432 }, 00:19:16.432 { 00:19:16.432 "name": "BaseBdev2", 00:19:16.432 "uuid": "dcdd83ee-965a-5303-af09-8633f0ec1007", 00:19:16.432 "is_configured": true, 00:19:16.432 "data_offset": 256, 00:19:16.432 "data_size": 7936 00:19:16.432 } 00:19:16.432 ] 00:19:16.432 }' 00:19:16.432 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:16.432 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.001 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:17.001 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.001 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.001 [2024-11-20 10:42:20.253859] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:17.001 [2024-11-20 10:42:20.253936] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:17.001 [2024-11-20 10:42:20.254040] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:17.001 [2024-11-20 10:42:20.254124] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:17.001 [2024-11-20 
10:42:20.254170] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:17.001 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.001 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.001 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.001 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.001 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:19:17.001 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.001 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:17.001 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:19:17.001 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:17.001 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:17.001 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.001 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.001 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.001 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:17.001 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.001 10:42:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.001 [2024-11-20 10:42:20.329709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:17.001 [2024-11-20 10:42:20.329800] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:17.001 [2024-11-20 10:42:20.329852] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:17.001 [2024-11-20 10:42:20.329879] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:17.001 [2024-11-20 10:42:20.331737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:17.001 [2024-11-20 10:42:20.331829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:17.001 [2024-11-20 10:42:20.331891] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:17.001 [2024-11-20 10:42:20.331954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:17.001 [2024-11-20 10:42:20.332063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:17.001 spare 00:19:17.001 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.001 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:17.001 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.001 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.001 [2024-11-20 10:42:20.431953] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:17.001 [2024-11-20 10:42:20.431982] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:17.001 [2024-11-20 10:42:20.432074] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:17.001 [2024-11-20 10:42:20.432152] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:17.001 [2024-11-20 10:42:20.432160] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:17.001 [2024-11-20 10:42:20.432241] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:17.001 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.001 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:17.001 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:17.001 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:17.001 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:17.001 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:17.001 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:17.001 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.001 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.001 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.001 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.001 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.001 
10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.001 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.001 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.001 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.260 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.260 "name": "raid_bdev1", 00:19:17.260 "uuid": "a611751f-fc79-408c-a516-2a103933e8e4", 00:19:17.260 "strip_size_kb": 0, 00:19:17.260 "state": "online", 00:19:17.260 "raid_level": "raid1", 00:19:17.260 "superblock": true, 00:19:17.260 "num_base_bdevs": 2, 00:19:17.260 "num_base_bdevs_discovered": 2, 00:19:17.260 "num_base_bdevs_operational": 2, 00:19:17.260 "base_bdevs_list": [ 00:19:17.260 { 00:19:17.260 "name": "spare", 00:19:17.260 "uuid": "0b61ebef-e9fb-5114-ae11-dbe8524bc2f0", 00:19:17.260 "is_configured": true, 00:19:17.260 "data_offset": 256, 00:19:17.260 "data_size": 7936 00:19:17.260 }, 00:19:17.260 { 00:19:17.260 "name": "BaseBdev2", 00:19:17.260 "uuid": "dcdd83ee-965a-5303-af09-8633f0ec1007", 00:19:17.260 "is_configured": true, 00:19:17.260 "data_offset": 256, 00:19:17.260 "data_size": 7936 00:19:17.260 } 00:19:17.260 ] 00:19:17.260 }' 00:19:17.260 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.260 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.520 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:17.520 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:17.520 10:42:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:17.520 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:17.520 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:17.520 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.520 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.520 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.520 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.520 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.520 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:17.520 "name": "raid_bdev1", 00:19:17.520 "uuid": "a611751f-fc79-408c-a516-2a103933e8e4", 00:19:17.520 "strip_size_kb": 0, 00:19:17.520 "state": "online", 00:19:17.520 "raid_level": "raid1", 00:19:17.520 "superblock": true, 00:19:17.520 "num_base_bdevs": 2, 00:19:17.520 "num_base_bdevs_discovered": 2, 00:19:17.520 "num_base_bdevs_operational": 2, 00:19:17.520 "base_bdevs_list": [ 00:19:17.520 { 00:19:17.520 "name": "spare", 00:19:17.520 "uuid": "0b61ebef-e9fb-5114-ae11-dbe8524bc2f0", 00:19:17.520 "is_configured": true, 00:19:17.520 "data_offset": 256, 00:19:17.520 "data_size": 7936 00:19:17.520 }, 00:19:17.520 { 00:19:17.520 "name": "BaseBdev2", 00:19:17.520 "uuid": "dcdd83ee-965a-5303-af09-8633f0ec1007", 00:19:17.520 "is_configured": true, 00:19:17.520 "data_offset": 256, 00:19:17.520 "data_size": 7936 00:19:17.520 } 00:19:17.520 ] 00:19:17.520 }' 00:19:17.520 10:42:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:17.520 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:17.520 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:17.520 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:17.520 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.520 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:17.520 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.520 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.520 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.780 10:42:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:17.780 10:42:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:17.780 10:42:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.780 10:42:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.780 [2024-11-20 10:42:21.028567] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:17.780 10:42:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.780 10:42:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:17.780 10:42:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:17.780 10:42:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:17.780 10:42:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:17.780 10:42:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:17.780 10:42:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:17.780 10:42:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.780 10:42:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.780 10:42:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.780 10:42:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.780 10:42:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.780 10:42:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.780 10:42:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.780 10:42:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.780 10:42:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.780 10:42:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.780 "name": "raid_bdev1", 00:19:17.780 "uuid": "a611751f-fc79-408c-a516-2a103933e8e4", 00:19:17.780 "strip_size_kb": 0, 00:19:17.780 "state": "online", 00:19:17.780 
"raid_level": "raid1", 00:19:17.780 "superblock": true, 00:19:17.780 "num_base_bdevs": 2, 00:19:17.780 "num_base_bdevs_discovered": 1, 00:19:17.780 "num_base_bdevs_operational": 1, 00:19:17.780 "base_bdevs_list": [ 00:19:17.780 { 00:19:17.780 "name": null, 00:19:17.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.780 "is_configured": false, 00:19:17.780 "data_offset": 0, 00:19:17.780 "data_size": 7936 00:19:17.780 }, 00:19:17.780 { 00:19:17.780 "name": "BaseBdev2", 00:19:17.780 "uuid": "dcdd83ee-965a-5303-af09-8633f0ec1007", 00:19:17.780 "is_configured": true, 00:19:17.780 "data_offset": 256, 00:19:17.780 "data_size": 7936 00:19:17.780 } 00:19:17.780 ] 00:19:17.780 }' 00:19:17.780 10:42:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.780 10:42:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.039 10:42:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:18.039 10:42:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.039 10:42:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.039 [2024-11-20 10:42:21.503882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:18.039 [2024-11-20 10:42:21.504115] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:18.039 [2024-11-20 10:42:21.504175] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:18.039 [2024-11-20 10:42:21.504242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:18.298 [2024-11-20 10:42:21.519042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:18.298 10:42:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.298 10:42:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:18.298 [2024-11-20 10:42:21.520856] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:19.234 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:19.234 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:19.234 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:19.234 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:19.235 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:19.235 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.235 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.235 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.235 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.235 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.235 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:19:19.235 "name": "raid_bdev1", 00:19:19.235 "uuid": "a611751f-fc79-408c-a516-2a103933e8e4", 00:19:19.235 "strip_size_kb": 0, 00:19:19.235 "state": "online", 00:19:19.235 "raid_level": "raid1", 00:19:19.235 "superblock": true, 00:19:19.235 "num_base_bdevs": 2, 00:19:19.235 "num_base_bdevs_discovered": 2, 00:19:19.235 "num_base_bdevs_operational": 2, 00:19:19.235 "process": { 00:19:19.235 "type": "rebuild", 00:19:19.235 "target": "spare", 00:19:19.235 "progress": { 00:19:19.235 "blocks": 2560, 00:19:19.235 "percent": 32 00:19:19.235 } 00:19:19.235 }, 00:19:19.235 "base_bdevs_list": [ 00:19:19.235 { 00:19:19.235 "name": "spare", 00:19:19.235 "uuid": "0b61ebef-e9fb-5114-ae11-dbe8524bc2f0", 00:19:19.235 "is_configured": true, 00:19:19.235 "data_offset": 256, 00:19:19.235 "data_size": 7936 00:19:19.235 }, 00:19:19.235 { 00:19:19.235 "name": "BaseBdev2", 00:19:19.235 "uuid": "dcdd83ee-965a-5303-af09-8633f0ec1007", 00:19:19.235 "is_configured": true, 00:19:19.235 "data_offset": 256, 00:19:19.235 "data_size": 7936 00:19:19.235 } 00:19:19.235 ] 00:19:19.235 }' 00:19:19.235 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:19.235 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:19.235 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:19.235 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:19.235 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:19.235 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.235 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.235 [2024-11-20 10:42:22.672576] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:19.493 [2024-11-20 10:42:22.725501] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:19.493 [2024-11-20 10:42:22.725593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:19.493 [2024-11-20 10:42:22.725608] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:19.493 [2024-11-20 10:42:22.725617] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:19.493 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.493 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:19.493 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:19.493 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:19.493 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:19.493 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:19.493 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:19.493 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.493 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.493 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.493 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.493 10:42:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.493 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.493 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.493 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.493 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.493 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.493 "name": "raid_bdev1", 00:19:19.493 "uuid": "a611751f-fc79-408c-a516-2a103933e8e4", 00:19:19.493 "strip_size_kb": 0, 00:19:19.493 "state": "online", 00:19:19.493 "raid_level": "raid1", 00:19:19.493 "superblock": true, 00:19:19.493 "num_base_bdevs": 2, 00:19:19.493 "num_base_bdevs_discovered": 1, 00:19:19.493 "num_base_bdevs_operational": 1, 00:19:19.493 "base_bdevs_list": [ 00:19:19.493 { 00:19:19.493 "name": null, 00:19:19.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.493 "is_configured": false, 00:19:19.493 "data_offset": 0, 00:19:19.493 "data_size": 7936 00:19:19.493 }, 00:19:19.493 { 00:19:19.493 "name": "BaseBdev2", 00:19:19.493 "uuid": "dcdd83ee-965a-5303-af09-8633f0ec1007", 00:19:19.493 "is_configured": true, 00:19:19.493 "data_offset": 256, 00:19:19.493 "data_size": 7936 00:19:19.493 } 00:19:19.493 ] 00:19:19.493 }' 00:19:19.493 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.493 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.752 10:42:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:19.752 10:42:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.752 10:42:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.752 [2024-11-20 10:42:23.222135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:19.752 [2024-11-20 10:42:23.222197] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:19.752 [2024-11-20 10:42:23.222219] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:19.752 [2024-11-20 10:42:23.222230] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.752 [2024-11-20 10:42:23.222431] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.752 [2024-11-20 10:42:23.222454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:19.752 [2024-11-20 10:42:23.222505] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:19.752 [2024-11-20 10:42:23.222518] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:19.752 [2024-11-20 10:42:23.222527] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:19.752 [2024-11-20 10:42:23.222555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:20.011 [2024-11-20 10:42:23.237590] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:20.011 spare 00:19:20.011 10:42:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.011 10:42:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:20.011 [2024-11-20 10:42:23.239352] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:20.950 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:20.950 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:20.950 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:20.950 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:20.950 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:20.950 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.950 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.950 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.950 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.950 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.951 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:19:20.951 "name": "raid_bdev1", 00:19:20.951 "uuid": "a611751f-fc79-408c-a516-2a103933e8e4", 00:19:20.951 "strip_size_kb": 0, 00:19:20.951 "state": "online", 00:19:20.951 "raid_level": "raid1", 00:19:20.951 "superblock": true, 00:19:20.951 "num_base_bdevs": 2, 00:19:20.951 "num_base_bdevs_discovered": 2, 00:19:20.951 "num_base_bdevs_operational": 2, 00:19:20.951 "process": { 00:19:20.951 "type": "rebuild", 00:19:20.951 "target": "spare", 00:19:20.951 "progress": { 00:19:20.951 "blocks": 2560, 00:19:20.951 "percent": 32 00:19:20.951 } 00:19:20.951 }, 00:19:20.951 "base_bdevs_list": [ 00:19:20.951 { 00:19:20.951 "name": "spare", 00:19:20.951 "uuid": "0b61ebef-e9fb-5114-ae11-dbe8524bc2f0", 00:19:20.951 "is_configured": true, 00:19:20.951 "data_offset": 256, 00:19:20.951 "data_size": 7936 00:19:20.951 }, 00:19:20.951 { 00:19:20.951 "name": "BaseBdev2", 00:19:20.951 "uuid": "dcdd83ee-965a-5303-af09-8633f0ec1007", 00:19:20.951 "is_configured": true, 00:19:20.951 "data_offset": 256, 00:19:20.951 "data_size": 7936 00:19:20.951 } 00:19:20.951 ] 00:19:20.951 }' 00:19:20.951 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:20.951 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:20.951 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:20.951 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:20.951 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:20.951 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.951 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.951 [2024-11-20 
10:42:24.400186] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:21.211 [2024-11-20 10:42:24.444054] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:21.211 [2024-11-20 10:42:24.444172] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:21.211 [2024-11-20 10:42:24.444209] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:21.211 [2024-11-20 10:42:24.444229] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:21.211 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.211 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:21.211 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:21.211 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:21.211 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:21.211 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:21.211 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:21.211 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.211 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.211 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.211 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.211 10:42:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.211 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.211 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.211 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.211 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.211 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:21.211 "name": "raid_bdev1", 00:19:21.211 "uuid": "a611751f-fc79-408c-a516-2a103933e8e4", 00:19:21.211 "strip_size_kb": 0, 00:19:21.211 "state": "online", 00:19:21.211 "raid_level": "raid1", 00:19:21.211 "superblock": true, 00:19:21.211 "num_base_bdevs": 2, 00:19:21.211 "num_base_bdevs_discovered": 1, 00:19:21.211 "num_base_bdevs_operational": 1, 00:19:21.211 "base_bdevs_list": [ 00:19:21.211 { 00:19:21.211 "name": null, 00:19:21.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.211 "is_configured": false, 00:19:21.211 "data_offset": 0, 00:19:21.211 "data_size": 7936 00:19:21.211 }, 00:19:21.211 { 00:19:21.211 "name": "BaseBdev2", 00:19:21.211 "uuid": "dcdd83ee-965a-5303-af09-8633f0ec1007", 00:19:21.211 "is_configured": true, 00:19:21.211 "data_offset": 256, 00:19:21.211 "data_size": 7936 00:19:21.211 } 00:19:21.211 ] 00:19:21.211 }' 00:19:21.211 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:21.211 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.471 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:21.471 10:42:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:21.471 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:21.471 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:21.471 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:21.471 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.471 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.471 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.471 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.471 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.731 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:21.731 "name": "raid_bdev1", 00:19:21.731 "uuid": "a611751f-fc79-408c-a516-2a103933e8e4", 00:19:21.731 "strip_size_kb": 0, 00:19:21.731 "state": "online", 00:19:21.731 "raid_level": "raid1", 00:19:21.731 "superblock": true, 00:19:21.731 "num_base_bdevs": 2, 00:19:21.731 "num_base_bdevs_discovered": 1, 00:19:21.731 "num_base_bdevs_operational": 1, 00:19:21.731 "base_bdevs_list": [ 00:19:21.731 { 00:19:21.731 "name": null, 00:19:21.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.731 "is_configured": false, 00:19:21.731 "data_offset": 0, 00:19:21.731 "data_size": 7936 00:19:21.731 }, 00:19:21.731 { 00:19:21.731 "name": "BaseBdev2", 00:19:21.731 "uuid": "dcdd83ee-965a-5303-af09-8633f0ec1007", 00:19:21.731 "is_configured": true, 00:19:21.731 "data_offset": 256, 
00:19:21.731 "data_size": 7936 00:19:21.731 } 00:19:21.731 ] 00:19:21.731 }' 00:19:21.731 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:21.731 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:21.731 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:21.731 10:42:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:21.731 10:42:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:21.731 10:42:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.731 10:42:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.731 10:42:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.731 10:42:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:21.731 10:42:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.731 10:42:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.731 [2024-11-20 10:42:25.028817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:21.731 [2024-11-20 10:42:25.028933] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:21.731 [2024-11-20 10:42:25.028973] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:21.731 [2024-11-20 10:42:25.029002] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:21.731 [2024-11-20 10:42:25.029176] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:21.731 [2024-11-20 10:42:25.029219] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:21.731 [2024-11-20 10:42:25.029294] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:21.731 [2024-11-20 10:42:25.029329] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:21.731 [2024-11-20 10:42:25.029414] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:21.731 [2024-11-20 10:42:25.029445] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:21.731 BaseBdev1 00:19:21.731 10:42:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.731 10:42:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:22.669 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:22.669 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:22.669 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:22.669 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:22.669 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:22.669 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:22.669 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.669 10:42:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.669 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.669 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.669 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.669 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.669 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.669 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.669 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.669 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.669 "name": "raid_bdev1", 00:19:22.669 "uuid": "a611751f-fc79-408c-a516-2a103933e8e4", 00:19:22.670 "strip_size_kb": 0, 00:19:22.670 "state": "online", 00:19:22.670 "raid_level": "raid1", 00:19:22.670 "superblock": true, 00:19:22.670 "num_base_bdevs": 2, 00:19:22.670 "num_base_bdevs_discovered": 1, 00:19:22.670 "num_base_bdevs_operational": 1, 00:19:22.670 "base_bdevs_list": [ 00:19:22.670 { 00:19:22.670 "name": null, 00:19:22.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.670 "is_configured": false, 00:19:22.670 "data_offset": 0, 00:19:22.670 "data_size": 7936 00:19:22.670 }, 00:19:22.670 { 00:19:22.670 "name": "BaseBdev2", 00:19:22.670 "uuid": "dcdd83ee-965a-5303-af09-8633f0ec1007", 00:19:22.670 "is_configured": true, 00:19:22.670 "data_offset": 256, 00:19:22.670 "data_size": 7936 00:19:22.670 } 00:19:22.670 ] 00:19:22.670 }' 00:19:22.670 10:42:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.670 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.239 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:23.239 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:23.239 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:23.239 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:23.239 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:23.239 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.239 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.239 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.239 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.239 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.239 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:23.239 "name": "raid_bdev1", 00:19:23.239 "uuid": "a611751f-fc79-408c-a516-2a103933e8e4", 00:19:23.239 "strip_size_kb": 0, 00:19:23.239 "state": "online", 00:19:23.239 "raid_level": "raid1", 00:19:23.239 "superblock": true, 00:19:23.239 "num_base_bdevs": 2, 00:19:23.239 "num_base_bdevs_discovered": 1, 00:19:23.239 "num_base_bdevs_operational": 1, 00:19:23.239 "base_bdevs_list": [ 00:19:23.239 { 00:19:23.239 "name": 
null, 00:19:23.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.239 "is_configured": false, 00:19:23.239 "data_offset": 0, 00:19:23.239 "data_size": 7936 00:19:23.239 }, 00:19:23.239 { 00:19:23.239 "name": "BaseBdev2", 00:19:23.239 "uuid": "dcdd83ee-965a-5303-af09-8633f0ec1007", 00:19:23.239 "is_configured": true, 00:19:23.239 "data_offset": 256, 00:19:23.239 "data_size": 7936 00:19:23.239 } 00:19:23.239 ] 00:19:23.239 }' 00:19:23.239 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:23.239 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:23.239 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:23.239 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:23.239 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:23.239 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:19:23.239 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:23.239 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:23.239 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:23.239 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:23.239 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:23.239 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:23.239 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.239 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.239 [2024-11-20 10:42:26.634292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:23.239 [2024-11-20 10:42:26.634468] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:23.239 [2024-11-20 10:42:26.634505] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:23.239 request: 00:19:23.239 { 00:19:23.239 "base_bdev": "BaseBdev1", 00:19:23.239 "raid_bdev": "raid_bdev1", 00:19:23.239 "method": "bdev_raid_add_base_bdev", 00:19:23.239 "req_id": 1 00:19:23.239 } 00:19:23.239 Got JSON-RPC error response 00:19:23.239 response: 00:19:23.239 { 00:19:23.239 "code": -22, 00:19:23.239 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:23.239 } 00:19:23.239 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:23.239 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:19:23.239 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:23.239 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:23.239 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:23.239 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:24.202 10:42:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:19:24.202 10:42:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:24.202 10:42:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:24.202 10:42:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:24.202 10:42:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:24.202 10:42:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:24.202 10:42:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:24.202 10:42:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:24.202 10:42:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:24.202 10:42:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:24.202 10:42:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.202 10:42:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.202 10:42:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.202 10:42:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.493 10:42:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.493 10:42:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.493 "name": "raid_bdev1", 00:19:24.493 "uuid": "a611751f-fc79-408c-a516-2a103933e8e4", 00:19:24.493 "strip_size_kb": 0, 
00:19:24.493 "state": "online", 00:19:24.493 "raid_level": "raid1", 00:19:24.493 "superblock": true, 00:19:24.493 "num_base_bdevs": 2, 00:19:24.493 "num_base_bdevs_discovered": 1, 00:19:24.493 "num_base_bdevs_operational": 1, 00:19:24.493 "base_bdevs_list": [ 00:19:24.493 { 00:19:24.493 "name": null, 00:19:24.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.493 "is_configured": false, 00:19:24.493 "data_offset": 0, 00:19:24.493 "data_size": 7936 00:19:24.493 }, 00:19:24.493 { 00:19:24.493 "name": "BaseBdev2", 00:19:24.493 "uuid": "dcdd83ee-965a-5303-af09-8633f0ec1007", 00:19:24.493 "is_configured": true, 00:19:24.493 "data_offset": 256, 00:19:24.493 "data_size": 7936 00:19:24.493 } 00:19:24.493 ] 00:19:24.493 }' 00:19:24.493 10:42:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.493 10:42:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.754 10:42:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:24.754 10:42:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:24.754 10:42:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:24.754 10:42:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:24.754 10:42:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:24.754 10:42:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.754 10:42:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.754 10:42:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.754 
10:42:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.754 10:42:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.754 10:42:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:24.754 "name": "raid_bdev1", 00:19:24.754 "uuid": "a611751f-fc79-408c-a516-2a103933e8e4", 00:19:24.754 "strip_size_kb": 0, 00:19:24.754 "state": "online", 00:19:24.754 "raid_level": "raid1", 00:19:24.754 "superblock": true, 00:19:24.754 "num_base_bdevs": 2, 00:19:24.754 "num_base_bdevs_discovered": 1, 00:19:24.754 "num_base_bdevs_operational": 1, 00:19:24.754 "base_bdevs_list": [ 00:19:24.754 { 00:19:24.754 "name": null, 00:19:24.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.754 "is_configured": false, 00:19:24.754 "data_offset": 0, 00:19:24.754 "data_size": 7936 00:19:24.754 }, 00:19:24.754 { 00:19:24.754 "name": "BaseBdev2", 00:19:24.754 "uuid": "dcdd83ee-965a-5303-af09-8633f0ec1007", 00:19:24.754 "is_configured": true, 00:19:24.754 "data_offset": 256, 00:19:24.754 "data_size": 7936 00:19:24.754 } 00:19:24.754 ] 00:19:24.754 }' 00:19:24.754 10:42:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:24.754 10:42:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:24.754 10:42:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:24.754 10:42:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:24.754 10:42:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89167 00:19:24.754 10:42:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89167 ']' 00:19:24.754 10:42:28 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89167 00:19:24.754 10:42:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:19:24.754 10:42:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:24.754 10:42:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89167 00:19:24.754 killing process with pid 89167 00:19:24.754 Received shutdown signal, test time was about 60.000000 seconds 00:19:24.754 00:19:24.754 Latency(us) 00:19:24.754 [2024-11-20T10:42:28.233Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.754 [2024-11-20T10:42:28.233Z] =================================================================================================================== 00:19:24.754 [2024-11-20T10:42:28.233Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:24.754 10:42:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:24.754 10:42:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:24.754 10:42:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89167' 00:19:24.754 10:42:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89167 00:19:24.754 [2024-11-20 10:42:28.194696] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:24.754 [2024-11-20 10:42:28.194812] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:24.754 [2024-11-20 10:42:28.194856] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:24.754 10:42:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@978 -- # wait 89167 00:19:24.754 [2024-11-20 10:42:28.194866] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:25.014 [2024-11-20 10:42:28.482544] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:26.392 10:42:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:19:26.392 00:19:26.392 real 0m17.222s 00:19:26.392 user 0m22.592s 00:19:26.392 sys 0m1.532s 00:19:26.392 10:42:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:26.392 10:42:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.392 ************************************ 00:19:26.392 END TEST raid_rebuild_test_sb_md_interleaved 00:19:26.392 ************************************ 00:19:26.392 10:42:29 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:19:26.392 10:42:29 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:19:26.392 10:42:29 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89167 ']' 00:19:26.392 10:42:29 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89167 00:19:26.392 10:42:29 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:19:26.392 00:19:26.392 real 12m4.386s 00:19:26.392 user 16m22.779s 00:19:26.392 sys 1m49.985s 00:19:26.392 10:42:29 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:26.392 ************************************ 00:19:26.392 END TEST bdev_raid 00:19:26.392 ************************************ 00:19:26.392 10:42:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:26.392 10:42:29 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:26.392 10:42:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:26.392 10:42:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:26.392 10:42:29 -- common/autotest_common.sh@10 -- # set +x 
00:19:26.392 ************************************ 00:19:26.392 START TEST spdkcli_raid 00:19:26.392 ************************************ 00:19:26.392 10:42:29 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:26.392 * Looking for test storage... 00:19:26.392 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:26.392 10:42:29 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:26.392 10:42:29 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:19:26.392 10:42:29 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:26.392 10:42:29 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:26.392 10:42:29 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:26.392 10:42:29 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:26.392 10:42:29 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:26.392 10:42:29 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:19:26.392 10:42:29 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:19:26.392 10:42:29 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:19:26.392 10:42:29 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:19:26.392 10:42:29 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:19:26.392 10:42:29 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:19:26.392 10:42:29 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:19:26.392 10:42:29 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:26.392 10:42:29 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:19:26.392 10:42:29 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:19:26.392 10:42:29 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:26.392 10:42:29 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:26.392 10:42:29 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:19:26.392 10:42:29 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:19:26.392 10:42:29 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:26.392 10:42:29 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:19:26.392 10:42:29 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:19:26.392 10:42:29 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:19:26.392 10:42:29 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:19:26.392 10:42:29 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:26.392 10:42:29 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:19:26.392 10:42:29 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:19:26.392 10:42:29 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:26.392 10:42:29 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:26.392 10:42:29 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:19:26.392 10:42:29 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:26.392 10:42:29 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:26.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.392 --rc genhtml_branch_coverage=1 00:19:26.392 --rc genhtml_function_coverage=1 00:19:26.392 --rc genhtml_legend=1 00:19:26.392 --rc geninfo_all_blocks=1 00:19:26.392 --rc geninfo_unexecuted_blocks=1 00:19:26.392 00:19:26.392 ' 00:19:26.392 10:42:29 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:26.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.392 --rc genhtml_branch_coverage=1 00:19:26.392 --rc genhtml_function_coverage=1 00:19:26.392 --rc genhtml_legend=1 00:19:26.392 --rc geninfo_all_blocks=1 00:19:26.392 --rc geninfo_unexecuted_blocks=1 00:19:26.392 00:19:26.392 ' 00:19:26.392 
10:42:29 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:26.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.392 --rc genhtml_branch_coverage=1 00:19:26.392 --rc genhtml_function_coverage=1 00:19:26.392 --rc genhtml_legend=1 00:19:26.392 --rc geninfo_all_blocks=1 00:19:26.392 --rc geninfo_unexecuted_blocks=1 00:19:26.392 00:19:26.392 ' 00:19:26.392 10:42:29 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:26.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.392 --rc genhtml_branch_coverage=1 00:19:26.392 --rc genhtml_function_coverage=1 00:19:26.392 --rc genhtml_legend=1 00:19:26.392 --rc geninfo_all_blocks=1 00:19:26.392 --rc geninfo_unexecuted_blocks=1 00:19:26.392 00:19:26.392 ' 00:19:26.392 10:42:29 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:26.392 10:42:29 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:26.392 10:42:29 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:26.392 10:42:29 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:19:26.393 10:42:29 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:19:26.393 10:42:29 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:19:26.393 10:42:29 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:19:26.393 10:42:29 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:19:26.393 10:42:29 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:19:26.393 10:42:29 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:19:26.393 10:42:29 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:19:26.393 10:42:29 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:19:26.393 10:42:29 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:19:26.393 10:42:29 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:19:26.393 10:42:29 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:19:26.393 10:42:29 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:19:26.393 10:42:29 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:19:26.393 10:42:29 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:19:26.393 10:42:29 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:19:26.393 10:42:29 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:19:26.393 10:42:29 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:19:26.393 10:42:29 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:19:26.393 10:42:29 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:19:26.393 10:42:29 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:19:26.393 10:42:29 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:19:26.393 10:42:29 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:26.393 10:42:29 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:26.393 10:42:29 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:26.393 10:42:29 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:26.393 10:42:29 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:26.393 10:42:29 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:26.393 10:42:29 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:19:26.393 10:42:29 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:19:26.393 10:42:29 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:26.393 10:42:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:26.393 10:42:29 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:19:26.393 10:42:29 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89839 00:19:26.393 10:42:29 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:19:26.652 10:42:29 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89839 00:19:26.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.652 10:42:29 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 89839 ']' 00:19:26.652 10:42:29 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.652 10:42:29 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:26.652 10:42:29 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.652 10:42:29 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:26.652 10:42:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:26.652 [2024-11-20 10:42:29.993601] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:19:26.652 [2024-11-20 10:42:29.993926] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89839 ] 00:19:26.911 [2024-11-20 10:42:30.181614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:26.911 [2024-11-20 10:42:30.288218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.911 [2024-11-20 10:42:30.288267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:27.849 10:42:31 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:27.849 10:42:31 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:19:27.849 10:42:31 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:19:27.849 10:42:31 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:27.849 10:42:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:27.849 10:42:31 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:19:27.850 10:42:31 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:27.850 10:42:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:27.850 10:42:31 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:19:27.850 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:19:27.850 ' 00:19:29.228 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:19:29.228 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:19:29.488 10:42:32 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:19:29.488 10:42:32 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:29.488 10:42:32 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:19:29.488 10:42:32 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:19:29.488 10:42:32 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:29.488 10:42:32 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:29.488 10:42:32 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:19:29.488 ' 00:19:30.444 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:19:30.704 10:42:33 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:19:30.704 10:42:33 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:30.704 10:42:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:30.704 10:42:34 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:19:30.704 10:42:34 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:30.704 10:42:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:30.704 10:42:34 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:19:30.704 10:42:34 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:19:31.274 10:42:34 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:19:31.275 10:42:34 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:19:31.275 10:42:34 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:19:31.275 10:42:34 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:31.275 10:42:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:31.275 10:42:34 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:19:31.275 10:42:34 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:31.275 10:42:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:31.275 10:42:34 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:19:31.275 ' 00:19:32.214 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:19:32.474 10:42:35 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:19:32.474 10:42:35 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:32.474 10:42:35 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:32.474 10:42:35 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:19:32.474 10:42:35 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:32.474 10:42:35 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:32.474 10:42:35 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:19:32.474 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:19:32.474 ' 00:19:33.856 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:19:33.856 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:19:33.856 10:42:37 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:19:33.856 10:42:37 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:33.856 10:42:37 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:33.856 10:42:37 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89839 00:19:33.856 10:42:37 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89839 ']' 00:19:33.856 10:42:37 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89839 00:19:33.856 10:42:37 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:19:33.856 10:42:37 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:33.856 10:42:37 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89839 00:19:34.117 10:42:37 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:34.117 10:42:37 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:34.117 10:42:37 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89839' 00:19:34.117 killing process with pid 89839 00:19:34.117 10:42:37 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 89839 00:19:34.117 10:42:37 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 89839 00:19:36.675 10:42:39 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:19:36.675 10:42:39 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89839 ']' 00:19:36.675 10:42:39 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89839 00:19:36.675 10:42:39 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89839 ']' 00:19:36.675 10:42:39 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89839 00:19:36.675 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89839) - No such process 00:19:36.676 10:42:39 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 89839 is not found' 00:19:36.676 Process with pid 89839 is not found 00:19:36.676 10:42:39 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:19:36.676 10:42:39 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:19:36.676 10:42:39 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:19:36.676 10:42:39 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:19:36.676 00:19:36.676 real 0m9.959s 00:19:36.676 user 0m20.583s 00:19:36.676 sys 
0m1.153s 00:19:36.676 10:42:39 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:36.676 ************************************ 00:19:36.676 END TEST spdkcli_raid 00:19:36.676 ************************************ 00:19:36.676 10:42:39 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:36.676 10:42:39 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:36.676 10:42:39 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:36.676 10:42:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:36.676 10:42:39 -- common/autotest_common.sh@10 -- # set +x 00:19:36.676 ************************************ 00:19:36.676 START TEST blockdev_raid5f 00:19:36.676 ************************************ 00:19:36.676 10:42:39 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:36.676 * Looking for test storage... 00:19:36.676 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:19:36.676 10:42:39 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:36.676 10:42:39 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:19:36.676 10:42:39 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:36.676 10:42:39 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:36.676 10:42:39 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:36.676 10:42:39 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:36.676 10:42:39 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:36.676 10:42:39 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:19:36.676 10:42:39 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:19:36.676 10:42:39 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:19:36.676 10:42:39 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:19:36.676 10:42:39 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:19:36.676 10:42:39 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:19:36.676 10:42:39 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:19:36.676 10:42:39 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:36.676 10:42:39 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:19:36.676 10:42:39 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:19:36.676 10:42:39 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:36.676 10:42:39 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:36.676 10:42:39 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:19:36.676 10:42:39 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:19:36.676 10:42:39 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:36.676 10:42:39 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:19:36.676 10:42:39 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:19:36.676 10:42:39 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:19:36.676 10:42:39 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:19:36.676 10:42:39 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:36.676 10:42:39 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:19:36.676 10:42:39 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:19:36.676 10:42:39 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:36.676 10:42:39 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:36.676 10:42:39 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:19:36.676 10:42:39 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:36.676 10:42:39 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:36.676 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.676 --rc genhtml_branch_coverage=1 00:19:36.676 --rc genhtml_function_coverage=1 00:19:36.676 --rc genhtml_legend=1 00:19:36.676 --rc geninfo_all_blocks=1 00:19:36.676 --rc geninfo_unexecuted_blocks=1 00:19:36.676 00:19:36.676 ' 00:19:36.676 10:42:39 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:36.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.676 --rc genhtml_branch_coverage=1 00:19:36.676 --rc genhtml_function_coverage=1 00:19:36.676 --rc genhtml_legend=1 00:19:36.676 --rc geninfo_all_blocks=1 00:19:36.676 --rc geninfo_unexecuted_blocks=1 00:19:36.676 00:19:36.676 ' 00:19:36.676 10:42:39 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:36.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.676 --rc genhtml_branch_coverage=1 00:19:36.676 --rc genhtml_function_coverage=1 00:19:36.676 --rc genhtml_legend=1 00:19:36.676 --rc geninfo_all_blocks=1 00:19:36.676 --rc geninfo_unexecuted_blocks=1 00:19:36.676 00:19:36.676 ' 00:19:36.676 10:42:39 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:36.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.676 --rc genhtml_branch_coverage=1 00:19:36.676 --rc genhtml_function_coverage=1 00:19:36.676 --rc genhtml_legend=1 00:19:36.676 --rc geninfo_all_blocks=1 00:19:36.676 --rc geninfo_unexecuted_blocks=1 00:19:36.676 00:19:36.676 ' 00:19:36.676 10:42:39 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:36.676 10:42:39 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:19:36.676 10:42:39 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:36.676 10:42:39 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:36.676 10:42:39 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:36.676 10:42:39 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:36.676 10:42:39 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:19:36.676 10:42:39 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:36.676 10:42:39 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:19:36.676 10:42:39 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:19:36.676 10:42:39 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:19:36.676 10:42:39 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:19:36.676 10:42:39 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:19:36.676 10:42:39 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:19:36.676 10:42:39 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:19:36.676 10:42:39 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:19:36.676 10:42:39 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:19:36.676 10:42:39 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:19:36.676 10:42:39 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:19:36.676 10:42:39 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:19:36.676 10:42:39 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:19:36.676 10:42:39 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:19:36.677 10:42:39 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:19:36.677 10:42:39 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:19:36.677 10:42:39 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90118 00:19:36.677 10:42:39 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:36.677 10:42:39 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:36.677 10:42:39 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90118 00:19:36.677 10:42:39 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90118 ']' 00:19:36.677 10:42:39 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.677 10:42:39 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:36.677 10:42:39 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.677 10:42:39 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:36.677 10:42:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:36.677 [2024-11-20 10:42:39.987969] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:19:36.677 [2024-11-20 10:42:39.988153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90118 ] 00:19:36.935 [2024-11-20 10:42:40.159407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.936 [2024-11-20 10:42:40.259992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.874 10:42:41 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:37.874 10:42:41 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:19:37.874 10:42:41 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:19:37.874 10:42:41 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:19:37.874 10:42:41 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:19:37.874 10:42:41 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.874 10:42:41 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:37.874 Malloc0 00:19:37.874 Malloc1 00:19:37.874 Malloc2 00:19:37.874 10:42:41 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.874 10:42:41 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:19:37.874 10:42:41 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.874 10:42:41 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:37.874 10:42:41 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.874 10:42:41 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:19:37.874 10:42:41 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:19:37.874 10:42:41 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.874 10:42:41 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:37.874 10:42:41 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.874 10:42:41 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:19:37.874 10:42:41 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.874 10:42:41 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:37.874 10:42:41 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.874 10:42:41 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:37.874 10:42:41 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.874 10:42:41 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:37.874 10:42:41 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.874 10:42:41 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:19:37.874 10:42:41 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:19:37.874 10:42:41 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:19:37.874 10:42:41 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.874 10:42:41 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:37.874 10:42:41 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.874 10:42:41 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:19:37.874 10:42:41 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "030b4ac6-2bda-4cb5-b72e-64ffbcbe4950"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "030b4ac6-2bda-4cb5-b72e-64ffbcbe4950",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "030b4ac6-2bda-4cb5-b72e-64ffbcbe4950",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "10a5306e-8409-4f19-aa6c-2ad2886b4f12",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "42ca6a23-802a-4f13-af9d-2349424faab1",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "e9235c54-5400-473b-b7bb-1188473e0479",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:37.874 10:42:41 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:19:38.134 10:42:41 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:19:38.134 10:42:41 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:19:38.134 10:42:41 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:19:38.134 10:42:41 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90118 00:19:38.134 10:42:41 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90118 ']' 00:19:38.134 10:42:41 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90118 00:19:38.134 10:42:41 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:19:38.134 10:42:41 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:38.134 10:42:41 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90118 00:19:38.134 killing process with pid 90118 00:19:38.134 10:42:41 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:38.134 10:42:41 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:38.134 10:42:41 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90118' 00:19:38.134 10:42:41 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90118 00:19:38.134 10:42:41 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90118 00:19:40.675 10:42:43 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:40.675 10:42:43 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:40.675 10:42:43 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:40.675 10:42:43 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:40.675 10:42:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:40.675 ************************************ 00:19:40.675 START TEST bdev_hello_world 00:19:40.675 ************************************ 00:19:40.675 10:42:43 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:40.675 [2024-11-20 10:42:43.966148] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:19:40.675 [2024-11-20 10:42:43.966253] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90181 ] 00:19:40.675 [2024-11-20 10:42:44.138203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.935 [2024-11-20 10:42:44.244271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.504 [2024-11-20 10:42:44.752698] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:41.504 [2024-11-20 10:42:44.752741] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:19:41.504 [2024-11-20 10:42:44.752757] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:41.504 [2024-11-20 10:42:44.753216] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:41.504 [2024-11-20 10:42:44.753341] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:41.504 [2024-11-20 10:42:44.753356] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:41.504 [2024-11-20 10:42:44.753435] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:19:41.504 00:19:41.504 [2024-11-20 10:42:44.753452] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:42.886 ************************************ 00:19:42.886 00:19:42.886 real 0m2.162s 00:19:42.886 user 0m1.802s 00:19:42.886 sys 0m0.240s 00:19:42.886 10:42:46 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:42.886 10:42:46 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:42.886 END TEST bdev_hello_world 00:19:42.886 ************************************ 00:19:42.886 10:42:46 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:19:42.886 10:42:46 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:42.886 10:42:46 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:42.886 10:42:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:42.886 ************************************ 00:19:42.886 START TEST bdev_bounds 00:19:42.886 ************************************ 00:19:42.886 10:42:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:19:42.886 10:42:46 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90223 00:19:42.886 10:42:46 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:42.886 10:42:46 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:42.886 10:42:46 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90223' 00:19:42.886 Process bdevio pid: 90223 00:19:42.886 10:42:46 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90223 00:19:42.886 10:42:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90223 ']' 00:19:42.886 10:42:46 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.886 10:42:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:42.886 10:42:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:42.886 10:42:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:42.886 10:42:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:42.886 [2024-11-20 10:42:46.199864] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:19:42.886 [2024-11-20 10:42:46.200003] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90223 ] 00:19:43.145 [2024-11-20 10:42:46.373098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:43.145 [2024-11-20 10:42:46.482769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.145 [2024-11-20 10:42:46.482906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.145 [2024-11-20 10:42:46.482943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:43.714 10:42:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:43.714 10:42:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:19:43.714 10:42:47 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:43.714 I/O targets: 00:19:43.714 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:19:43.714 00:19:43.714 
00:19:43.714 CUnit - A unit testing framework for C - Version 2.1-3 00:19:43.714 http://cunit.sourceforge.net/ 00:19:43.714 00:19:43.714 00:19:43.714 Suite: bdevio tests on: raid5f 00:19:43.714 Test: blockdev write read block ...passed 00:19:43.714 Test: blockdev write zeroes read block ...passed 00:19:43.714 Test: blockdev write zeroes read no split ...passed 00:19:43.974 Test: blockdev write zeroes read split ...passed 00:19:43.974 Test: blockdev write zeroes read split partial ...passed 00:19:43.974 Test: blockdev reset ...passed 00:19:43.974 Test: blockdev write read 8 blocks ...passed 00:19:43.974 Test: blockdev write read size > 128k ...passed 00:19:43.974 Test: blockdev write read invalid size ...passed 00:19:43.974 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:43.974 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:43.974 Test: blockdev write read max offset ...passed 00:19:43.974 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:43.974 Test: blockdev writev readv 8 blocks ...passed 00:19:43.974 Test: blockdev writev readv 30 x 1block ...passed 00:19:43.974 Test: blockdev writev readv block ...passed 00:19:43.974 Test: blockdev writev readv size > 128k ...passed 00:19:43.974 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:43.974 Test: blockdev comparev and writev ...passed 00:19:43.974 Test: blockdev nvme passthru rw ...passed 00:19:43.974 Test: blockdev nvme passthru vendor specific ...passed 00:19:43.974 Test: blockdev nvme admin passthru ...passed 00:19:43.974 Test: blockdev copy ...passed 00:19:43.974 00:19:43.974 Run Summary: Type Total Ran Passed Failed Inactive 00:19:43.974 suites 1 1 n/a 0 0 00:19:43.974 tests 23 23 23 0 0 00:19:43.974 asserts 130 130 130 0 n/a 00:19:43.974 00:19:43.974 Elapsed time = 0.514 seconds 00:19:43.974 0 00:19:43.974 10:42:47 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90223 00:19:43.974 
10:42:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90223 ']' 00:19:43.974 10:42:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90223 00:19:43.974 10:42:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:19:43.974 10:42:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:43.974 10:42:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90223 00:19:43.974 killing process with pid 90223 00:19:43.974 10:42:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:43.974 10:42:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:43.974 10:42:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90223' 00:19:43.974 10:42:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90223 00:19:43.974 10:42:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90223 00:19:45.356 10:42:48 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:45.356 00:19:45.356 real 0m2.623s 00:19:45.356 user 0m6.568s 00:19:45.356 sys 0m0.346s 00:19:45.356 10:42:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:45.356 10:42:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:45.356 ************************************ 00:19:45.356 END TEST bdev_bounds 00:19:45.356 ************************************ 00:19:45.356 10:42:48 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:45.356 10:42:48 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:45.356 10:42:48 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:45.356 
10:42:48 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:45.356 ************************************ 00:19:45.356 START TEST bdev_nbd 00:19:45.356 ************************************ 00:19:45.356 10:42:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:45.356 10:42:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:45.356 10:42:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:45.356 10:42:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:45.356 10:42:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:45.356 10:42:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:19:45.356 10:42:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:45.356 10:42:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:19:45.356 10:42:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:45.356 10:42:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:45.356 10:42:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:45.356 10:42:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:19:45.356 10:42:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:19:45.356 10:42:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:45.356 10:42:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:19:45.356 10:42:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:19:45.356 10:42:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90277 00:19:45.356 10:42:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:45.356 10:42:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:45.356 10:42:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90277 /var/tmp/spdk-nbd.sock 00:19:45.356 10:42:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90277 ']' 00:19:45.356 10:42:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:45.356 10:42:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:45.356 10:42:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:45.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:45.356 10:42:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:45.356 10:42:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:45.616 [2024-11-20 10:42:48.903887] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:19:45.617 [2024-11-20 10:42:48.904079] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:45.617 [2024-11-20 10:42:49.076709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.876 [2024-11-20 10:42:49.184193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.445 10:42:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:46.445 10:42:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:19:46.445 10:42:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:19:46.445 10:42:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:46.445 10:42:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:19:46.445 10:42:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:46.445 10:42:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:19:46.445 10:42:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:46.445 10:42:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:19:46.445 10:42:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:46.445 10:42:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:46.445 10:42:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:46.445 10:42:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:46.445 10:42:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:46.445 10:42:49 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:19:46.704 10:42:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:46.704 10:42:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:46.704 10:42:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:46.704 10:42:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:46.704 10:42:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:46.704 10:42:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:46.704 10:42:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:46.704 10:42:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:46.704 10:42:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:46.704 10:42:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:46.704 10:42:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:46.704 10:42:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:46.704 1+0 records in 00:19:46.704 1+0 records out 00:19:46.704 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000597812 s, 6.9 MB/s 00:19:46.704 10:42:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:46.704 10:42:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:46.704 10:42:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:46.704 10:42:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:19:46.704 10:42:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:46.704 10:42:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:46.704 10:42:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:46.704 10:42:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:46.964 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:46.964 { 00:19:46.964 "nbd_device": "/dev/nbd0", 00:19:46.964 "bdev_name": "raid5f" 00:19:46.964 } 00:19:46.964 ]' 00:19:46.964 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:46.964 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:46.964 { 00:19:46.964 "nbd_device": "/dev/nbd0", 00:19:46.964 "bdev_name": "raid5f" 00:19:46.964 } 00:19:46.964 ]' 00:19:46.964 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:46.964 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:46.964 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:46.964 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:46.964 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:46.964 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:46.964 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:46.964 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:46.964 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:19:46.964 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:46.964 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:46.964 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:46.964 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:46.964 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:47.241 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:47.241 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:47.241 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:47.241 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:47.241 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:47.241 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:47.241 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:47.241 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:47.241 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:47.241 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:47.241 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:47.241 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:47.241 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:47.241 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:47.241 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:47.241 10:42:50 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:47.241 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:47.241 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:47.241 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:47.241 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:19:47.241 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:47.241 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:19:47.241 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:47.241 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:47.241 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:47.241 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:19:47.241 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:47.241 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:47.241 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:47.241 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:47.241 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:47.241 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:47.241 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:19:47.506 /dev/nbd0 00:19:47.506 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:47.506 10:42:50 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:47.506 10:42:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:47.506 10:42:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:47.506 10:42:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:47.506 10:42:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:47.506 10:42:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:47.506 10:42:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:47.506 10:42:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:47.506 10:42:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:47.506 10:42:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:47.506 1+0 records in 00:19:47.506 1+0 records out 00:19:47.506 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036626 s, 11.2 MB/s 00:19:47.506 10:42:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:47.506 10:42:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:47.506 10:42:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:47.506 10:42:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:47.506 10:42:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:47.506 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:47.506 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:47.507 10:42:50 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:47.507 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:47.507 10:42:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:47.765 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:47.765 { 00:19:47.765 "nbd_device": "/dev/nbd0", 00:19:47.765 "bdev_name": "raid5f" 00:19:47.765 } 00:19:47.765 ]' 00:19:47.765 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:47.765 { 00:19:47.765 "nbd_device": "/dev/nbd0", 00:19:47.765 "bdev_name": "raid5f" 00:19:47.765 } 00:19:47.765 ]' 00:19:47.765 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:47.765 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:19:47.765 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:19:47.765 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:47.765 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:19:47.765 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:19:47.765 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:19:47.765 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:19:47.765 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:19:47.765 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:47.765 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:47.765 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:47.765 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:47.765 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:47.765 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:47.765 256+0 records in 00:19:47.765 256+0 records out 00:19:47.765 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136127 s, 77.0 MB/s 00:19:47.765 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:47.765 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:48.023 256+0 records in 00:19:48.023 256+0 records out 00:19:48.023 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.028705 s, 36.5 MB/s 00:19:48.023 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:19:48.023 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:48.023 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:48.023 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:48.023 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:48.023 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:48.023 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:48.023 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:48.023 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:48.023 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:48.023 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:48.023 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:48.023 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:48.023 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:48.023 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:48.023 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:48.023 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:48.023 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:48.023 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:48.023 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:48.023 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:48.023 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:48.023 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:48.023 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:48.023 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:48.023 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:48.023 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:48.023 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:19:48.281 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:48.281 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:48.281 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:48.281 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:48.281 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:48.281 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:48.281 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:48.281 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:48.281 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:48.281 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:48.281 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:48.281 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:48.281 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:48.281 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:48.281 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:48.281 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:48.539 malloc_lvol_verify 00:19:48.539 10:42:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:48.798 68743410-c573-4d99-85b8-43583740d144 00:19:48.798 10:42:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:49.056 007440b4-f13a-42d7-b4eb-39151b484f69 00:19:49.056 10:42:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:49.056 /dev/nbd0 00:19:49.056 10:42:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:49.056 10:42:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:49.056 10:42:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:49.056 10:42:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:49.056 10:42:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:19:49.056 mke2fs 1.47.0 (5-Feb-2023) 00:19:49.056 Discarding device blocks: 0/4096 done 00:19:49.056 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:49.056 00:19:49.056 Allocating group tables: 0/1 done 00:19:49.056 Writing inode tables: 0/1 done 00:19:49.315 Creating journal (1024 blocks): done 00:19:49.315 Writing superblocks and filesystem accounting information: 0/1 done 00:19:49.315 00:19:49.315 10:42:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:49.315 10:42:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:49.315 10:42:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:49.315 10:42:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:49.315 10:42:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:49.315 10:42:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:49.315 10:42:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:49.315 10:42:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:49.315 10:42:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:49.315 10:42:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:49.315 10:42:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:49.315 10:42:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:49.315 10:42:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:49.315 10:42:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:49.315 10:42:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:49.315 10:42:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90277 00:19:49.315 10:42:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90277 ']' 00:19:49.315 10:42:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90277 00:19:49.315 10:42:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:19:49.315 10:42:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:49.315 10:42:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90277 00:19:49.315 10:42:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:49.315 10:42:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:49.315 killing process with pid 90277 00:19:49.315 10:42:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90277' 00:19:49.315 10:42:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90277 00:19:49.315 10:42:52 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90277 00:19:50.694 10:42:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:50.694 00:19:50.694 real 0m5.325s 00:19:50.694 user 0m7.188s 00:19:50.694 sys 0m1.218s 00:19:50.694 10:42:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:50.694 10:42:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:50.694 ************************************ 00:19:50.694 END TEST bdev_nbd 00:19:50.694 ************************************ 00:19:50.954 10:42:54 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:19:50.954 10:42:54 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:19:50.954 10:42:54 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:19:50.954 10:42:54 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:19:50.954 10:42:54 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:50.954 10:42:54 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:50.954 10:42:54 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:50.954 ************************************ 00:19:50.954 START TEST bdev_fio 00:19:50.954 ************************************ 00:19:50.954 10:42:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:19:50.954 10:42:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:50.954 10:42:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:50.954 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:50.954 10:42:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:50.954 10:42:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:50.954 10:42:54 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:50.954 10:42:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:50.954 10:42:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:50.954 10:42:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:50.954 10:42:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:19:50.954 10:42:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:19:50.955 10:42:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:50.955 10:42:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:50.955 10:42:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:50.955 10:42:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:19:50.955 10:42:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:50.955 10:42:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:50.955 10:42:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:50.955 10:42:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:19:50.955 10:42:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:19:50.955 10:42:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:19:50.955 10:42:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:19:50.955 10:42:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:19:50.955 10:42:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:19:50.955 10:42:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:50.955 10:42:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:19:50.955 10:42:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:19:50.955 10:42:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:50.955 10:42:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:50.955 10:42:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:19:50.955 10:42:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:50.955 10:42:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:50.955 ************************************ 00:19:50.955 START TEST bdev_fio_rw_verify 00:19:50.955 ************************************ 00:19:50.955 10:42:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:50.955 10:42:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:50.955 10:42:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:50.955 10:42:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:50.955 10:42:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:50.955 10:42:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:50.955 10:42:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:19:50.955 10:42:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:50.955 10:42:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:50.955 10:42:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:50.955 10:42:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:19:50.955 10:42:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:50.955 10:42:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:50.955 10:42:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:50.955 10:42:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:19:50.955 10:42:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:50.955 10:42:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:51.215 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:51.215 fio-3.35 00:19:51.215 Starting 1 thread 00:20:03.433 00:20:03.433 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90485: Wed Nov 20 10:43:05 2024 00:20:03.433 read: IOPS=12.4k, BW=48.6MiB/s (51.0MB/s)(486MiB/10001msec) 00:20:03.433 slat (nsec): min=16792, max=55095, avg=19016.98, stdev=1781.74 00:20:03.433 clat (usec): min=9, max=303, avg=128.58, stdev=45.36 00:20:03.433 lat (usec): min=28, max=327, avg=147.59, stdev=45.55 00:20:03.433 clat percentiles (usec): 00:20:03.433 | 50.000th=[ 129], 99.000th=[ 212], 99.900th=[ 245], 99.990th=[ 265], 00:20:03.433 | 99.999th=[ 281] 00:20:03.433 write: IOPS=13.0k, BW=50.9MiB/s (53.3MB/s)(502MiB/9876msec); 0 zone resets 00:20:03.433 slat (usec): min=7, max=226, avg=16.43, stdev= 3.26 00:20:03.433 clat (usec): min=53, max=1180, avg=293.84, stdev=40.29 00:20:03.433 lat (usec): min=67, max=1344, avg=310.27, stdev=41.22 00:20:03.433 clat percentiles (usec): 00:20:03.433 | 50.000th=[ 297], 99.000th=[ 383], 99.900th=[ 523], 99.990th=[ 1057], 00:20:03.433 | 99.999th=[ 1172] 00:20:03.433 bw ( KiB/s): min=48152, max=54792, per=99.04%, avg=51579.37, stdev=1647.65, samples=19 00:20:03.433 iops : min=12038, max=13698, avg=12894.84, stdev=411.91, samples=19 00:20:03.433 lat (usec) : 10=0.01%, 20=0.01%, 
50=0.01%, 100=16.61%, 250=39.86% 00:20:03.433 lat (usec) : 500=43.47%, 750=0.04%, 1000=0.01% 00:20:03.433 lat (msec) : 2=0.01% 00:20:03.433 cpu : usr=99.12%, sys=0.34%, ctx=27, majf=0, minf=10166 00:20:03.433 IO depths : 1=7.6%, 2=19.8%, 4=55.2%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:03.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.433 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.433 issued rwts: total=124427,128586,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:03.433 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:03.433 00:20:03.433 Run status group 0 (all jobs): 00:20:03.433 READ: bw=48.6MiB/s (51.0MB/s), 48.6MiB/s-48.6MiB/s (51.0MB/s-51.0MB/s), io=486MiB (510MB), run=10001-10001msec 00:20:03.433 WRITE: bw=50.9MiB/s (53.3MB/s), 50.9MiB/s-50.9MiB/s (53.3MB/s-53.3MB/s), io=502MiB (527MB), run=9876-9876msec 00:20:03.693 ----------------------------------------------------- 00:20:03.693 Suppressions used: 00:20:03.693 count bytes template 00:20:03.693 1 7 /usr/src/fio/parse.c 00:20:03.693 310 29760 /usr/src/fio/iolog.c 00:20:03.693 1 8 libtcmalloc_minimal.so 00:20:03.693 1 904 libcrypto.so 00:20:03.693 ----------------------------------------------------- 00:20:03.693 00:20:03.693 00:20:03.693 real 0m12.620s 00:20:03.693 user 0m12.791s 00:20:03.693 sys 0m0.642s 00:20:03.693 10:43:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:03.693 10:43:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:20:03.693 ************************************ 00:20:03.693 END TEST bdev_fio_rw_verify 00:20:03.693 ************************************ 00:20:03.693 10:43:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:20:03.693 10:43:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:03.693 10:43:07 
blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:20:03.693 10:43:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:03.693 10:43:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:20:03.693 10:43:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:20:03.693 10:43:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:20:03.693 10:43:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:20:03.693 10:43:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:03.693 10:43:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:20:03.693 10:43:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:20:03.693 10:43:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:03.693 10:43:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:20:03.693 10:43:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:20:03.693 10:43:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:20:03.693 10:43:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:20:03.693 10:43:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "030b4ac6-2bda-4cb5-b72e-64ffbcbe4950"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "030b4ac6-2bda-4cb5-b72e-64ffbcbe4950",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "030b4ac6-2bda-4cb5-b72e-64ffbcbe4950",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "10a5306e-8409-4f19-aa6c-2ad2886b4f12",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "42ca6a23-802a-4f13-af9d-2349424faab1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "e9235c54-5400-473b-b7bb-1188473e0479",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:20:03.693 10:43:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:20:03.693 10:43:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:20:03.693 10:43:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:03.693 10:43:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:20:03.693 /home/vagrant/spdk_repo/spdk 00:20:03.693 10:43:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:20:03.693 10:43:07 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@363 -- # return 0 00:20:03.693 00:20:03.693 real 0m12.891s 00:20:03.693 user 0m12.904s 00:20:03.693 sys 0m0.771s 00:20:03.693 10:43:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:03.693 10:43:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:03.693 ************************************ 00:20:03.693 END TEST bdev_fio 00:20:03.693 ************************************ 00:20:03.693 10:43:07 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:03.693 10:43:07 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:03.693 10:43:07 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:03.693 10:43:07 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:03.693 10:43:07 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:03.693 ************************************ 00:20:03.693 START TEST bdev_verify 00:20:03.693 ************************************ 00:20:03.694 10:43:07 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:03.953 [2024-11-20 10:43:07.247262] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:20:03.953 [2024-11-20 10:43:07.247389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90645 ] 00:20:03.953 [2024-11-20 10:43:07.420945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:04.214 [2024-11-20 10:43:07.531008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.214 [2024-11-20 10:43:07.531056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.783 Running I/O for 5 seconds... 00:20:06.662 17156.00 IOPS, 67.02 MiB/s [2024-11-20T10:43:11.082Z] 17791.50 IOPS, 69.50 MiB/s [2024-11-20T10:43:12.463Z] 16720.00 IOPS, 65.31 MiB/s [2024-11-20T10:43:13.034Z] 16947.00 IOPS, 66.20 MiB/s [2024-11-20T10:43:13.294Z] 17084.80 IOPS, 66.74 MiB/s 00:20:09.815 Latency(us) 00:20:09.815 [2024-11-20T10:43:13.294Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.815 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:09.815 Verification LBA range: start 0x0 length 0x2000 00:20:09.815 raid5f : 5.02 8558.29 33.43 0.00 0.00 22493.02 78.25 19918.37 00:20:09.815 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:09.815 Verification LBA range: start 0x2000 length 0x2000 00:20:09.815 raid5f : 5.02 8515.86 33.27 0.00 0.00 22560.14 345.21 21292.05 00:20:09.815 [2024-11-20T10:43:13.294Z] =================================================================================================================== 00:20:09.815 [2024-11-20T10:43:13.294Z] Total : 17074.15 66.70 0.00 0.00 22526.49 78.25 21292.05 00:20:11.196 00:20:11.196 real 0m7.179s 00:20:11.196 user 0m13.323s 00:20:11.196 sys 0m0.247s 00:20:11.196 10:43:14 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:11.196 10:43:14 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:20:11.196 ************************************ 00:20:11.196 END TEST bdev_verify 00:20:11.196 ************************************ 00:20:11.196 10:43:14 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:11.196 10:43:14 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:11.196 10:43:14 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:11.196 10:43:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:11.196 ************************************ 00:20:11.196 START TEST bdev_verify_big_io 00:20:11.196 ************************************ 00:20:11.196 10:43:14 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:11.196 [2024-11-20 10:43:14.481503] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:20:11.196 [2024-11-20 10:43:14.481634] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90742 ] 00:20:11.196 [2024-11-20 10:43:14.653314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:11.456 [2024-11-20 10:43:14.759185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.456 [2024-11-20 10:43:14.759234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:12.049 Running I/O for 5 seconds... 
00:20:13.927 760.00 IOPS, 47.50 MiB/s [2024-11-20T10:43:18.789Z] 823.00 IOPS, 51.44 MiB/s [2024-11-20T10:43:19.357Z] 846.00 IOPS, 52.88 MiB/s [2024-11-20T10:43:20.739Z] 888.50 IOPS, 55.53 MiB/s [2024-11-20T10:43:20.739Z] 914.00 IOPS, 57.12 MiB/s 00:20:17.260 Latency(us) 00:20:17.260 [2024-11-20T10:43:20.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.260 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:17.260 Verification LBA range: start 0x0 length 0x200 00:20:17.260 raid5f : 5.24 460.60 28.79 0.00 0.00 6840080.23 171.71 320525.41 00:20:17.260 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:17.260 Verification LBA range: start 0x200 length 0x200 00:20:17.260 raid5f : 5.25 459.13 28.70 0.00 0.00 6905724.09 134.15 322356.99 00:20:17.260 [2024-11-20T10:43:20.739Z] =================================================================================================================== 00:20:17.260 [2024-11-20T10:43:20.739Z] Total : 919.72 57.48 0.00 0.00 6872888.55 134.15 322356.99 00:20:18.641 00:20:18.641 real 0m7.433s 00:20:18.641 user 0m13.836s 00:20:18.641 sys 0m0.247s 00:20:18.641 10:43:21 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:18.641 10:43:21 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:20:18.641 ************************************ 00:20:18.641 END TEST bdev_verify_big_io 00:20:18.641 ************************************ 00:20:18.641 10:43:21 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:18.641 10:43:21 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:18.641 10:43:21 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:18.641 10:43:21 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:18.641 ************************************ 00:20:18.641 START TEST bdev_write_zeroes 00:20:18.641 ************************************ 00:20:18.641 10:43:21 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:18.641 [2024-11-20 10:43:21.989808] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:20:18.641 [2024-11-20 10:43:21.989924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90842 ] 00:20:18.901 [2024-11-20 10:43:22.168299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.901 [2024-11-20 10:43:22.272799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.472 Running I/O for 1 seconds... 
00:20:20.411 30327.00 IOPS, 118.46 MiB/s 00:20:20.411 Latency(us) 00:20:20.411 [2024-11-20T10:43:23.890Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.411 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:20.411 raid5f : 1.01 30304.11 118.38 0.00 0.00 4210.68 1366.53 5752.29 00:20:20.411 [2024-11-20T10:43:23.890Z] =================================================================================================================== 00:20:20.411 [2024-11-20T10:43:23.890Z] Total : 30304.11 118.38 0.00 0.00 4210.68 1366.53 5752.29 00:20:21.793 00:20:21.793 real 0m3.192s 00:20:21.793 user 0m2.819s 00:20:21.793 sys 0m0.247s 00:20:21.793 10:43:25 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:21.793 10:43:25 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:20:21.793 ************************************ 00:20:21.793 END TEST bdev_write_zeroes 00:20:21.793 ************************************ 00:20:21.793 10:43:25 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:21.793 10:43:25 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:21.793 10:43:25 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:21.793 10:43:25 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:21.793 ************************************ 00:20:21.793 START TEST bdev_json_nonenclosed 00:20:21.793 ************************************ 00:20:21.793 10:43:25 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:21.793 [2024-11-20 
10:43:25.247576] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:20:21.793 [2024-11-20 10:43:25.247709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90895 ] 00:20:22.052 [2024-11-20 10:43:25.419413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.052 [2024-11-20 10:43:25.525286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.052 [2024-11-20 10:43:25.525408] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:20:22.052 [2024-11-20 10:43:25.525435] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:22.052 [2024-11-20 10:43:25.525445] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:22.329 00:20:22.329 real 0m0.599s 00:20:22.329 user 0m0.368s 00:20:22.329 sys 0m0.127s 00:20:22.329 10:43:25 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:22.329 10:43:25 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:20:22.329 ************************************ 00:20:22.329 END TEST bdev_json_nonenclosed 00:20:22.329 ************************************ 00:20:22.616 10:43:25 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:22.616 10:43:25 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:22.616 10:43:25 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:22.616 10:43:25 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:22.616 
************************************ 00:20:22.616 START TEST bdev_json_nonarray 00:20:22.616 ************************************ 00:20:22.617 10:43:25 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:22.617 [2024-11-20 10:43:25.912528] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:20:22.617 [2024-11-20 10:43:25.912628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90919 ] 00:20:22.617 [2024-11-20 10:43:26.082807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.876 [2024-11-20 10:43:26.185746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.876 [2024-11-20 10:43:26.185854] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:20:22.876 [2024-11-20 10:43:26.185871] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:22.876 [2024-11-20 10:43:26.185889] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:23.136 00:20:23.136 real 0m0.589s 00:20:23.136 user 0m0.352s 00:20:23.136 sys 0m0.134s 00:20:23.136 10:43:26 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:23.136 10:43:26 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:20:23.136 ************************************ 00:20:23.136 END TEST bdev_json_nonarray 00:20:23.136 ************************************ 00:20:23.136 10:43:26 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:20:23.136 10:43:26 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:20:23.136 10:43:26 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:20:23.136 10:43:26 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:20:23.136 10:43:26 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:20:23.136 10:43:26 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:20:23.136 10:43:26 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:23.136 10:43:26 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:20:23.136 10:43:26 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:20:23.136 10:43:26 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:20:23.136 10:43:26 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:20:23.136 00:20:23.136 real 0m46.841s 00:20:23.136 user 1m3.462s 00:20:23.136 sys 0m4.625s 00:20:23.136 10:43:26 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:23.136 10:43:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:23.136 
************************************ 00:20:23.136 END TEST blockdev_raid5f 00:20:23.136 ************************************ 00:20:23.136 10:43:26 -- spdk/autotest.sh@194 -- # uname -s 00:20:23.136 10:43:26 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:20:23.136 10:43:26 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:20:23.136 10:43:26 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:20:23.136 10:43:26 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:20:23.136 10:43:26 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:20:23.136 10:43:26 -- spdk/autotest.sh@260 -- # timing_exit lib 00:20:23.136 10:43:26 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:23.136 10:43:26 -- common/autotest_common.sh@10 -- # set +x 00:20:23.136 10:43:26 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:20:23.136 10:43:26 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:20:23.136 10:43:26 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:20:23.136 10:43:26 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:20:23.136 10:43:26 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:20:23.136 10:43:26 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:20:23.136 10:43:26 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:20:23.136 10:43:26 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:20:23.136 10:43:26 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:20:23.136 10:43:26 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:20:23.136 10:43:26 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:20:23.136 10:43:26 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:20:23.136 10:43:26 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:20:23.136 10:43:26 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:20:23.136 10:43:26 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:20:23.136 10:43:26 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:20:23.136 10:43:26 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:20:23.396 10:43:26 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:20:23.396 10:43:26 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:20:23.396 10:43:26 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:20:23.396 10:43:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:23.396 10:43:26 -- common/autotest_common.sh@10 -- # set +x 00:20:23.396 10:43:26 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:20:23.396 10:43:26 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:20:23.396 10:43:26 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:20:23.396 10:43:26 -- common/autotest_common.sh@10 -- # set +x 00:20:25.305 INFO: APP EXITING 00:20:25.305 INFO: killing all VMs 00:20:25.305 INFO: killing vhost app 00:20:25.305 INFO: EXIT DONE 00:20:25.875 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:25.875 Waiting for block devices as requested 00:20:25.875 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:25.875 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:26.815 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:26.815 Cleaning 00:20:26.815 Removing: /var/run/dpdk/spdk0/config 00:20:26.815 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:20:26.815 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:20:26.815 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:20:26.815 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:20:26.815 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:20:26.815 Removing: /var/run/dpdk/spdk0/hugepage_info 00:20:26.815 Removing: /dev/shm/spdk_tgt_trace.pid56978 00:20:26.815 Removing: /var/run/dpdk/spdk0 00:20:26.815 Removing: /var/run/dpdk/spdk_pid56732 00:20:26.815 Removing: /var/run/dpdk/spdk_pid56978 00:20:26.815 Removing: /var/run/dpdk/spdk_pid57218 00:20:26.815 Removing: /var/run/dpdk/spdk_pid57322 00:20:26.815 Removing: /var/run/dpdk/spdk_pid57378 00:20:26.815 Removing: /var/run/dpdk/spdk_pid57517 00:20:26.815 Removing: /var/run/dpdk/spdk_pid57535 
00:20:26.815 Removing: /var/run/dpdk/spdk_pid57745 00:20:26.815 Removing: /var/run/dpdk/spdk_pid57861 00:20:27.075 Removing: /var/run/dpdk/spdk_pid57969 00:20:27.075 Removing: /var/run/dpdk/spdk_pid58097 00:20:27.075 Removing: /var/run/dpdk/spdk_pid58205 00:20:27.075 Removing: /var/run/dpdk/spdk_pid58250 00:20:27.075 Removing: /var/run/dpdk/spdk_pid58285 00:20:27.075 Removing: /var/run/dpdk/spdk_pid58357 00:20:27.075 Removing: /var/run/dpdk/spdk_pid58490 00:20:27.075 Removing: /var/run/dpdk/spdk_pid58943 00:20:27.075 Removing: /var/run/dpdk/spdk_pid59020 00:20:27.075 Removing: /var/run/dpdk/spdk_pid59100 00:20:27.075 Removing: /var/run/dpdk/spdk_pid59116 00:20:27.075 Removing: /var/run/dpdk/spdk_pid59267 00:20:27.075 Removing: /var/run/dpdk/spdk_pid59289 00:20:27.075 Removing: /var/run/dpdk/spdk_pid59444 00:20:27.075 Removing: /var/run/dpdk/spdk_pid59466 00:20:27.075 Removing: /var/run/dpdk/spdk_pid59530 00:20:27.075 Removing: /var/run/dpdk/spdk_pid59553 00:20:27.075 Removing: /var/run/dpdk/spdk_pid59623 00:20:27.075 Removing: /var/run/dpdk/spdk_pid59641 00:20:27.075 Removing: /var/run/dpdk/spdk_pid59847 00:20:27.075 Removing: /var/run/dpdk/spdk_pid59884 00:20:27.075 Removing: /var/run/dpdk/spdk_pid59967 00:20:27.075 Removing: /var/run/dpdk/spdk_pid61349 00:20:27.075 Removing: /var/run/dpdk/spdk_pid61555 00:20:27.075 Removing: /var/run/dpdk/spdk_pid61706 00:20:27.075 Removing: /var/run/dpdk/spdk_pid62355 00:20:27.075 Removing: /var/run/dpdk/spdk_pid62572 00:20:27.075 Removing: /var/run/dpdk/spdk_pid62718 00:20:27.075 Removing: /var/run/dpdk/spdk_pid63367 00:20:27.075 Removing: /var/run/dpdk/spdk_pid63697 00:20:27.075 Removing: /var/run/dpdk/spdk_pid63848 00:20:27.075 Removing: /var/run/dpdk/spdk_pid65233 00:20:27.075 Removing: /var/run/dpdk/spdk_pid65492 00:20:27.075 Removing: /var/run/dpdk/spdk_pid65637 00:20:27.075 Removing: /var/run/dpdk/spdk_pid67036 00:20:27.075 Removing: /var/run/dpdk/spdk_pid67287 00:20:27.075 Removing: /var/run/dpdk/spdk_pid67432 
00:20:27.075 Removing: /var/run/dpdk/spdk_pid68826 00:20:27.075 Removing: /var/run/dpdk/spdk_pid69272 00:20:27.075 Removing: /var/run/dpdk/spdk_pid69413 00:20:27.075 Removing: /var/run/dpdk/spdk_pid70908 00:20:27.075 Removing: /var/run/dpdk/spdk_pid71177 00:20:27.075 Removing: /var/run/dpdk/spdk_pid71324 00:20:27.075 Removing: /var/run/dpdk/spdk_pid72812 00:20:27.075 Removing: /var/run/dpdk/spdk_pid73078 00:20:27.075 Removing: /var/run/dpdk/spdk_pid73218 00:20:27.075 Removing: /var/run/dpdk/spdk_pid74709 00:20:27.075 Removing: /var/run/dpdk/spdk_pid75197 00:20:27.075 Removing: /var/run/dpdk/spdk_pid75343 00:20:27.075 Removing: /var/run/dpdk/spdk_pid75492 00:20:27.075 Removing: /var/run/dpdk/spdk_pid75910 00:20:27.075 Removing: /var/run/dpdk/spdk_pid76642 00:20:27.075 Removing: /var/run/dpdk/spdk_pid77018 00:20:27.075 Removing: /var/run/dpdk/spdk_pid77701 00:20:27.075 Removing: /var/run/dpdk/spdk_pid78143 00:20:27.075 Removing: /var/run/dpdk/spdk_pid78905 00:20:27.075 Removing: /var/run/dpdk/spdk_pid79333 00:20:27.075 Removing: /var/run/dpdk/spdk_pid81298 00:20:27.075 Removing: /var/run/dpdk/spdk_pid81736 00:20:27.334 Removing: /var/run/dpdk/spdk_pid82176 00:20:27.334 Removing: /var/run/dpdk/spdk_pid84276 00:20:27.334 Removing: /var/run/dpdk/spdk_pid84756 00:20:27.334 Removing: /var/run/dpdk/spdk_pid85272 00:20:27.334 Removing: /var/run/dpdk/spdk_pid86329 00:20:27.334 Removing: /var/run/dpdk/spdk_pid86657 00:20:27.334 Removing: /var/run/dpdk/spdk_pid87591 00:20:27.334 Removing: /var/run/dpdk/spdk_pid87914 00:20:27.334 Removing: /var/run/dpdk/spdk_pid88841 00:20:27.334 Removing: /var/run/dpdk/spdk_pid89167 00:20:27.334 Removing: /var/run/dpdk/spdk_pid89839 00:20:27.334 Removing: /var/run/dpdk/spdk_pid90118 00:20:27.334 Removing: /var/run/dpdk/spdk_pid90181 00:20:27.334 Removing: /var/run/dpdk/spdk_pid90223 00:20:27.334 Removing: /var/run/dpdk/spdk_pid90470 00:20:27.334 Removing: /var/run/dpdk/spdk_pid90645 00:20:27.334 Removing: /var/run/dpdk/spdk_pid90742 
00:20:27.334 Removing: /var/run/dpdk/spdk_pid90842 00:20:27.334 Removing: /var/run/dpdk/spdk_pid90895 00:20:27.334 Removing: /var/run/dpdk/spdk_pid90919 00:20:27.334 Clean 00:20:27.334 10:43:30 -- common/autotest_common.sh@1453 -- # return 0 00:20:27.334 10:43:30 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:20:27.334 10:43:30 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:27.334 10:43:30 -- common/autotest_common.sh@10 -- # set +x 00:20:27.334 10:43:30 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:20:27.334 10:43:30 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:27.334 10:43:30 -- common/autotest_common.sh@10 -- # set +x 00:20:27.335 10:43:30 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:27.594 10:43:30 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:20:27.594 10:43:30 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:20:27.594 10:43:30 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:20:27.594 10:43:30 -- spdk/autotest.sh@398 -- # hostname 00:20:27.594 10:43:30 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:20:27.594 geninfo: WARNING: invalid characters removed from testname! 
00:20:49.544 10:43:51 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:50.484 10:43:53 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:52.418 10:43:55 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:54.328 10:43:57 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:56.235 10:43:59 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:58.141 10:44:01 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:00.049 10:44:03 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:21:00.049 10:44:03 -- spdk/autorun.sh@1 -- $ timing_finish 00:21:00.049 10:44:03 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:21:00.049 10:44:03 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:21:00.049 10:44:03 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:21:00.049 10:44:03 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:00.049 + [[ -n 5426 ]] 00:21:00.049 + sudo kill 5426 00:21:00.060 [Pipeline] } 00:21:00.075 [Pipeline] // timeout 00:21:00.081 [Pipeline] } 00:21:00.096 [Pipeline] // stage 00:21:00.101 [Pipeline] } 00:21:00.115 [Pipeline] // catchError 00:21:00.125 [Pipeline] stage 00:21:00.127 [Pipeline] { (Stop VM) 00:21:00.140 [Pipeline] sh 00:21:00.423 + vagrant halt 00:21:02.970 ==> default: Halting domain... 00:21:11.121 [Pipeline] sh 00:21:11.404 + vagrant destroy -f 00:21:13.945 ==> default: Removing domain... 
00:21:13.957 [Pipeline] sh 00:21:14.240 + mv output /var/jenkins/workspace/raid-vg-autotest_2/output 00:21:14.253 [Pipeline] } 00:21:14.268 [Pipeline] // stage 00:21:14.274 [Pipeline] } 00:21:14.288 [Pipeline] // dir 00:21:14.293 [Pipeline] } 00:21:14.309 [Pipeline] // wrap 00:21:14.315 [Pipeline] } 00:21:14.330 [Pipeline] // catchError 00:21:14.340 [Pipeline] stage 00:21:14.342 [Pipeline] { (Epilogue) 00:21:14.353 [Pipeline] sh 00:21:14.647 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:21:18.855 [Pipeline] catchError 00:21:18.857 [Pipeline] { 00:21:18.872 [Pipeline] sh 00:21:19.156 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:21:19.157 Artifacts sizes are good 00:21:19.166 [Pipeline] } 00:21:19.182 [Pipeline] // catchError 00:21:19.193 [Pipeline] archiveArtifacts 00:21:19.201 Archiving artifacts 00:21:19.299 [Pipeline] cleanWs 00:21:19.331 [WS-CLEANUP] Deleting project workspace... 00:21:19.331 [WS-CLEANUP] Deferred wipeout is used... 00:21:19.338 [WS-CLEANUP] done 00:21:19.340 [Pipeline] } 00:21:19.354 [Pipeline] // stage 00:21:19.359 [Pipeline] } 00:21:19.373 [Pipeline] // node 00:21:19.380 [Pipeline] End of Pipeline 00:21:19.416 Finished: SUCCESS